dlk.core.models package
Submodules
dlk.core.models.basic module
- class dlk.core.models.basic.BasicModel(config: dlk.core.models.basic.BasicModelConfig, checkpoint)[source]
Bases:
dlk.core.base_module.BaseModel
Basic & General Model
- check_keys_are_provided(provide: List[str] = []) None [source]
Check that all keys required by the submodules are provided.
Returns: None
Raises: PermissionError
- forward(inputs: Dict[str, torch.Tensor]) Dict[str, torch.Tensor] [source]
Do the forward pass on one mini-batch.
- Parameters
inputs – a mini-batch of inputs
Returns: the outputs
- predict_step(inputs: Dict[str, torch.Tensor]) Dict[str, torch.Tensor] [source]
Run prediction for one batch.
- Parameters
inputs – one mini-batch of inputs
Returns: the prediction outputs
- provide_keys() List[str] [source]
Return all keys of the dict the model returns.
This method may be unused and may be removed in a future release.
Returns: all keys
- test_step(inputs: Dict[str, torch.Tensor]) Dict[str, torch.Tensor] [source]
Run the test step for one batch.
- Parameters
inputs – one mini-batch of inputs
Returns: the test outputs
- training: bool
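The methods above all share one pattern: every submodule consumes a `Dict[str, Tensor]` and returns an enriched `Dict[str, Tensor]`, and `forward` simply chains embedding, encoder, and decoder. The following is a minimal sketch of that dict-passing pattern only; the `StubModule` and `SketchBasicModel` names are hypothetical stand-ins (plain Python values instead of `torch.Tensor`), not the real dlk implementations.

```python
from typing import Dict, List


class StubModule:
    """Hypothetical stand-in for an embedding/encoder/decoder submodule.

    It copies the input dict and adds its own output keys, mimicking
    how each dlk submodule enriches the shared batch dict.
    """

    def __init__(self, name: str, provided: List[str]):
        self.name = name
        self.provided = provided  # keys this submodule adds (cf. provide_keys)

    def __call__(self, inputs: Dict[str, float]) -> Dict[str, float]:
        outputs = dict(inputs)  # never mutate the caller's dict
        for key in self.provided:
            outputs[key] = len(inputs)  # placeholder computation
        return outputs


class SketchBasicModel:
    """Chains the three submodules the way BasicModel.forward does."""

    def __init__(self):
        self.embedding = StubModule("embedding", ["embedding_out"])
        self.encoder = StubModule("encoder", ["encoder_out"])
        self.decoder = StubModule("decoder", ["logits"])

    def forward(self, inputs: Dict[str, float]) -> Dict[str, float]:
        # embedding -> encoder -> decoder, each enriching the shared dict
        return self.decoder(self.encoder(self.embedding(inputs)))


model = SketchBasicModel()
out = model.forward({"input_ids": 1.0})
# the original keys survive alongside each submodule's additions
```

Because the dict flows through unchanged except for additions, `predict_step` and `test_step` can reuse the same `forward` and post-process its output dict.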
- class dlk.core.models.basic.BasicModelConfig(config)[source]
Bases:
dlk.utils.config.BaseConfig
Config for BasicModel
- Config Example:
>>> {
>>>     embedding: {
>>>         _base: "static",
>>>         config: {
>>>             embedding_file: "*@*", // the embedding file, must be saved as a numpy array by pickle
>>>             embedding_dim: "*@*",
>>>             // if the embedding_file is a dict, you should provide the dict trace to the embedding
>>>             embedding_trace: ".", // default: the file itself is the embedding
>>>             /* embedding_trace: "embedding", // this means embedding = pickle.load(embedding_file)["embedding"] */
>>>             /* embedding_trace: "meta.embedding", // this means embedding = pickle.load(embedding_file)["meta"]["embedding"] */
>>>             freeze: false, // whether to freeze the embedding
>>>             dropout: 0, // dropout rate
>>>             output_map: {},
>>>         },
>>>     },
>>>     decoder: {
>>>         _base: "linear",
>>>         config: {
>>>             input_size: "*@*",
>>>             output_size: "*@*",
>>>             pool: null,
>>>             dropout: "*@*", // the decoder output needs no dropout
>>>             output_map: {}
>>>         },
>>>     },
>>>     encoder: {
>>>         _base: "lstm",
>>>         config: {
>>>             output_map: {},
>>>             hidden_size: "*@*",
>>>             input_size: "*@*",
>>>             output_size: "*@*",
>>>             num_layers: 1,
>>>             dropout: "*@*", // dropout between layers
>>>         },
>>>     },
>>>     "initmethod": {
>>>         "_base": "range_norm"
>>>     },
>>>     "config": {
>>>         "embedding_dim": "*@*",
>>>         "dropout": "*@*",
>>>         "embedding_file": "*@*",
>>>         "embedding_trace": "token_embedding",
>>>     },
>>>     _link: {
>>>         "config.embedding_dim": ["embedding.config.embedding_dim",
>>>                                  "encoder.config.input_size",
>>>                                  "encoder.config.output_size",
>>>                                  "encoder.config.hidden_size",
>>>                                  "decoder.config.output_size",
>>>                                  "decoder.config.input_size"
>>>                                 ],
>>>         "config.dropout": ["encoder.config.dropout", "decoder.config.dropout", "embedding.config.dropout"],
>>>         "config.embedding_file": ['embedding.config.embedding_file'],
>>>         "config.embedding_trace": ['embedding.config.embedding_trace']
>>>     },
>>>     _name: "basic",
>>> }
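The `_link` section in the example maps one source path (e.g. `config.embedding_dim`) to a list of target paths, so a shared value is written once and propagated to every linked sub-config. The sketch below illustrates that propagation idea with dotted-path helpers; the function names (`apply_links`, `get_by_path`, `set_by_path`) are hypothetical, and the real resolution logic lives in `dlk.utils.config` and may differ.

```python
def get_by_path(cfg: dict, path: str):
    """Follow a dotted path like 'config.embedding_dim' into nested dicts."""
    node = cfg
    for part in path.split("."):
        node = node[part]
    return node


def set_by_path(cfg: dict, path: str, value) -> None:
    """Write a value at a dotted path, creating intermediate dicts."""
    parts = path.split(".")
    node = cfg
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value


def apply_links(config: dict) -> dict:
    """Copy each _link source value into all of its target paths."""
    links = config.pop("_link", {})
    for src, targets in links.items():
        value = get_by_path(config, src)
        for tgt in targets:
            set_by_path(config, tgt, value)
    return config


cfg = {
    "config": {"embedding_dim": 256},
    "embedding": {"config": {}},
    "encoder": {"config": {}},
    "_link": {
        "config.embedding_dim": [
            "embedding.config.embedding_dim",
            "encoder.config.input_size",
        ],
    },
}
cfg = apply_links(cfg)
# embedding_dim now appears at both linked target paths
```

This is why the example only sets `embedding_dim`, `dropout`, `embedding_file`, and `embedding_trace` once under the top-level `"config"` key: `_link` fans each value out to the submodule configs.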
- get_decoder(config)[source]
Return the Decoder and its DecoderConfig.
- Parameters
config – the decoder config
- Returns
Decoder, DecoderConfig
- get_embedding(config: Dict)[source]
Return the Embedding and its EmbeddingConfig.
- Parameters
config – the embedding config
- Returns
Embedding, EmbeddingConfig
Module contents
models