dlk.core package
Subpackages
- dlk.core.callbacks package
- dlk.core.imodels package
- dlk.core.initmethods package
- dlk.core.layers package
- Subpackages
- dlk.core.layers.decoders package
- dlk.core.layers.embeddings package
- Submodules
- dlk.core.layers.embeddings.combine_word_char_cnn module
- dlk.core.layers.embeddings.identity module
- dlk.core.layers.embeddings.pretrained_transformers module
- dlk.core.layers.embeddings.random module
- dlk.core.layers.embeddings.static module
- dlk.core.layers.embeddings.static_char_cnn module
- Module contents
- dlk.core.layers.encoders package
- Module contents
- dlk.core.losses package
- dlk.core.models package
- dlk.core.modules package
- dlk.core.optimizers package
- dlk.core.schedulers package
Submodules
dlk.core.base_module module
- class dlk.core.base_module.BaseModel[source]
Bases:
torch.nn.modules.module.Module, dlk.core.base_module.ModuleOutputRenameMixin, dlk.core.base_module.IModuleIO, dlk.core.base_module.IModuleStep
All PyTorch models should inherit from this class.
- forward(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor] [source]
All models should implement this method.
- Parameters
inputs – one mini-batch inputs
- Returns
one mini-batch outputs
- training: bool
- class dlk.core.base_module.BaseModule(config: dlk.core.base_module.BaseModuleConfig)[source]
Bases:
torch.nn.modules.module.Module, dlk.core.base_module.ModuleOutputRenameMixin, dlk.core.base_module.IModuleIO, dlk.core.base_module.IModuleStep
All PyTorch modules should inherit from this class.
- check_keys_are_provided(provide: Set[str]) → None [source]
Check whether the keys required by this module are provided.
- Returns
pass or not
- forward(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor] [source]
All modules should implement this method.
- Parameters
inputs – one mini-batch inputs
- Returns
one mini-batch outputs
- init_weight(method)[source]
Initialize the weights of the submodules with the given init method.
- Parameters
method – init method
- Returns
None
- training: bool
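The key contract above — a module declares the keys it requires, checks that upstream provides them, then computes its outputs in forward — can be sketched framework-agnostically. The class name, the `required_keys`/`provided_keys` attributes, and the raise-on-missing behavior below are illustrative assumptions, not dlk's actual implementation:

```python
class SketchModule:
    """Minimal sketch of the BaseModule key contract (illustrative, not dlk's code)."""

    def __init__(self, required_keys, provided_keys):
        self.required_keys = set(required_keys)   # keys forward() expects in its input dict
        self.provided_keys = set(provided_keys)   # keys forward() adds to its output dict

    def check_keys_are_provided(self, provide):
        # Raise if any required key is missing from what upstream provides.
        missing = self.required_keys - set(provide)
        if missing:
            raise ValueError(f"missing required keys: {sorted(missing)}")

    def forward(self, inputs):
        # A real module would compute tensors; here we just tag the outputs.
        outputs = dict(inputs)
        for key in self.provided_keys:
            outputs[key] = f"computed:{key}"
        return outputs

encoder = SketchModule(required_keys={"embedding"}, provided_keys={"encoded"})
encoder.check_keys_are_provided({"embedding", "mask"})  # passes silently
batch_out = encoder.forward({"embedding": "emb-tensor", "mask": "mask-tensor"})
```

Passing the whole mini-batch dict through (rather than positional tensors) is what lets modules be composed by key name alone.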
- class dlk.core.base_module.BaseModuleConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Configuration class for BaseModule.
- class dlk.core.base_module.IModuleIO[source]
Bases:
object
Interface for checking a module's input and output keys.
- abstract check_keys_are_provided(provide: List[str]) → bool [source]
Check whether the keys required by this module are provided.
- Returns
pass or not
- check_module_chain(module_list: List[dlk.core.base_module.BaseModule]) → bool [source]
Check whether the interfaces of the listed modules are aligned.
- Parameters
module_list – a list of modules, in execution order
- Returns
pass or not
- Raises
ValueError – the check is not passed
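check_module_chain verifies that every module in a pipeline receives the keys it needs from the modules before it. One plausible way to sketch that check — representing each module as a `(required, provided)` pair of key sets, which is an assumption for illustration rather than dlk's API:

```python
def check_module_chain(module_list, initial_keys):
    """Verify that each module in the chain gets the keys it requires.

    module_list: list of (required_keys, provided_keys) pairs, in execution order.
    Raises ValueError at the first module whose requirements are unmet.
    """
    available = set(initial_keys)
    for i, (required, provided) in enumerate(module_list):
        missing = set(required) - available
        if missing:
            raise ValueError(f"module {i} is missing keys: {sorted(missing)}")
        available |= set(provided)   # downstream modules may consume these outputs
    return True

# An embedding -> encoder -> decoder chain whose key names are aligned:
chain = [
    ({"input_ids"}, {"embedding"}),
    ({"embedding"}, {"encoded"}),
    ({"encoded"}, {"logits"}),
]
check_module_chain(chain, initial_keys={"input_ids"})
```

Accumulating `available` rather than only passing the previous module's outputs reflects that a module may read keys produced anywhere earlier in the chain.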
- class dlk.core.base_module.IModuleStep[source]
Bases:
object
Interface defining the per-step (predict/test) methods.
- abstract predict_step(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor] [source]
Run prediction on one mini-batch.
- Parameters
inputs – one mini-batch inputs
- Returns
the prediction outputs
- test_step(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor] [source]
Run the test step on one mini-batch.
- Parameters
inputs – one mini-batch inputs
- Returns
one mini-batch outputs
- class dlk.core.base_module.ModuleOutputRenameMixin[source]
Bases:
object
Rename output keys according to the config so they match the input fields of the downstream module.
- dict_rename(input: Dict, output_map: Dict[str, str]) → Dict [source]
Rename the keys of input (a dict) according to output_map (a name map).
- Parameters
input – the dict whose keys will be renamed
output_map – mapping from old key names to new key names
- Returns
the input dict with renamed keys
- get_input_name(name: str) → str [source]
Use config._input_map to map the name to its real name.
- Parameters
name – input_name
- Returns
real_name
- get_output_name(name: str) → str [source]
Use config._output_map to map the name to its real name.
- Parameters
name – output_name
- Returns
real_name
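The renaming behavior that dict_rename describes can be reproduced in a few lines. This is an illustrative re-implementation assuming unmapped keys pass through unchanged, not dlk's actual code:

```python
def dict_rename(inputs, output_map):
    """Rename keys of `inputs` via `output_map`; keys absent from the map pass through."""
    return {output_map.get(key, key): value for key, value in inputs.items()}

outputs = dict_rename(
    {"logits": "logits-tensor", "mask": "mask-tensor"},
    {"logits": "decoder_logits"},
)
# outputs == {"decoder_logits": "logits-tensor", "mask": "mask-tensor"}
```

This is what lets a downstream module expecting `decoder_logits` consume the output of a module that natively emits `logits`, with the mapping driven purely by config.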
- class dlk.core.base_module.SimpleModule(config: dlk.core.base_module.BaseModuleConfig)[source]
Bases:
dlk.core.base_module.BaseModule
SimpleModule: every train/predict/test/validation step calls forward.
- forward(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor] [source]
In SimpleModule, every step routes to this method.
- Parameters
inputs – one mini-batch inputs
- Returns
one mini-batch outputs
- predict_step(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor] [source]
Run prediction on one mini-batch.
- Parameters
inputs – one mini-batch inputs
- Returns
one mini-batch outputs
- test_step(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor] [source]
Run the test step on one mini-batch.
- Parameters
inputs – one mini-batch inputs
- Returns
one mini-batch outputs
- training: bool
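The SimpleModule pattern — every step delegating to the same forward — can be sketched as follows. This is an illustration of the delegation idea only; real dlk modules subclass torch.nn.Module and operate on tensors, and the `validation_step` name is an assumption based on the docstring's "train/predict/test/validation" list:

```python
class SimpleSketch:
    """Sketch of SimpleModule: every step calls the same forward() (illustrative)."""

    def forward(self, inputs):
        outputs = dict(inputs)
        outputs["logits"] = "computed-logits"
        return outputs

    # All steps share forward's behavior, as SimpleModule's docstring describes.
    def predict_step(self, inputs):
        return self.forward(inputs)

    def test_step(self, inputs):
        return self.forward(inputs)

    def validation_step(self, inputs):
        return self.forward(inputs)

module = SimpleSketch()
batch = {"input_ids": "ids-tensor"}
assert module.predict_step(batch) == module.test_step(batch)
```

Subclasses that need step-specific behavior (e.g. dropping loss computation at predict time) would override the individual step methods instead of using this shortcut.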