dlk.core.modules package
Submodules
dlk.core.modules.bert module
- class dlk.core.modules.bert.BertWrap(config: dlk.core.modules.bert.BertWrapConfig)[source]
Bases:
dlk.core.modules.Module
BERT wrapper module
- forward(inputs: Dict)[source]
Run the forward pass on a mini-batch.
- Parameters
inputs – a mini-batch of inputs
- Returns
sequence_output, all_hidden_states, all_self_attentions
- init_weight(method)[source]
Initialize the model weights via 'bert.init_weight()' or load them with from_pretrain.
- Parameters
method – the init method; unused for pretrained transformers
- Returns
None
- training: bool
- class dlk.core.modules.bert.BertWrapConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for BertWrap
- Config Example:
>>> {
>>>     "config": {
>>>         "pretrained_model_path": "*@*",
>>>         "from_pretrain": true,
>>>         "freeze": false,
>>>         "dropout": 0.0,
>>>     },
>>>     "_name": "bert",
>>> }
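A minimal usage sketch based on the signatures above. The checkpoint path and the keys and shapes of the inputs Dict are assumptions (the docs only describe the batch as a Dict); adapt them to your data pipeline.
>>> import torch
>>> from dlk.core.modules.bert import BertWrap, BertWrapConfig
>>> config = BertWrapConfig({
>>>     "config": {
>>>         "pretrained_model_path": "./pretrained/bert-base-uncased",  # placeholder path
>>>         "from_pretrain": True,
>>>         "freeze": False,
>>>         "dropout": 0.0,
>>>     },
>>>     "_name": "bert",
>>> })
>>> bert = BertWrap(config)
>>> inputs = {  # assumed transformer-style batch keys
>>>     "input_ids": torch.randint(0, 30522, (2, 16)),
>>>     "attention_mask": torch.ones(2, 16, dtype=torch.long),
>>> }
>>> sequence_output, all_hidden_states, all_self_attentions = bert(inputs)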
dlk.core.modules.conv1d module
- class dlk.core.modules.conv1d.Conv1d(config: dlk.core.modules.conv1d.Conv1dConfig)[source]
Bases:
dlk.core.modules.Module
Convolution module for 1d input
- forward(x: torch.Tensor)[source]
Run the forward pass on a mini-batch.
- Parameters
x – a mini-batch of inputs
- Returns
the convolution result; the shape is the same as the input
- training: bool
- class dlk.core.modules.conv1d.Conv1dConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for Conv1d
- Config Example:
>>> {
>>>     "config": {
>>>         "in_channels": "*@*",
>>>         "out_channels": "*@*",
>>>         "dropout": 0.0,
>>>         "kernel_sizes": [3],
>>>     },
>>>     "_name": "conv1d",
>>> }
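A hedged sketch of using Conv1d with the config above; the input layout (batch * sequence_length * channels) is an assumption, since the docs only promise that the output shape matches the input.
>>> import torch
>>> from dlk.core.modules.conv1d import Conv1d, Conv1dConfig
>>> config = Conv1dConfig({
>>>     "config": {
>>>         "in_channels": 128,
>>>         "out_channels": 128,
>>>         "dropout": 0.0,
>>>         "kernel_sizes": [3],
>>>     },
>>>     "_name": "conv1d",
>>> })
>>> conv = Conv1d(config)
>>> x = torch.randn(2, 20, 128)  # assumed layout: batch * seq_len * in_channels
>>> out = conv(x)                # same shape as the input per the docs above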
dlk.core.modules.crf module
- class dlk.core.modules.crf.CRFConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for ConditionalRandomField
- Config Example:
>>> {
>>>     "config": {
>>>         "output_size": 2,
>>>         "batch_first": true,
>>>         "reduction": "mean", // none|sum|mean|token_mean
>>>     },
>>>     "_name": "crf",
>>> }
- class dlk.core.modules.crf.ConditionalRandomField(config: dlk.core.modules.crf.CRFConfig)[source]
Bases:
dlk.core.modules.Module
CRF module: training_step is used for training, forward for decoding.
- forward(logits: torch.FloatTensor, mask: torch.LongTensor)[source]
Predict step: get the best tag path.
- Parameters
logits – the emissions, batch_size*max_len*num_tags
mask – batch_size*max_len, mask==0 means padding
- Returns
the best paths, batch_size*max_len
- init_weight(method: Callable)[source]
Initialize the weights of transitions, start_transitions and end_transitions.
The transition parameters are initialized randomly from a uniform distribution between -0.1 and 0.1.
- Parameters
method – the init method; unused here
- Returns
None
- training: bool
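A sketch of decoding with the CRF module, following the forward signature above; the tag count and the mask convention (mask==0 for padding) mirror the parameter descriptions.
>>> import torch
>>> from dlk.core.modules.crf import ConditionalRandomField, CRFConfig
>>> config = CRFConfig({
>>>     "config": {
>>>         "output_size": 5,       # number of tags
>>>         "batch_first": True,
>>>         "reduction": "mean",
>>>     },
>>>     "_name": "crf",
>>> })
>>> crf = ConditionalRandomField(config)
>>> logits = torch.randn(2, 10, 5)              # batch_size * max_len * num_tags
>>> mask = torch.ones(2, 10, dtype=torch.long)
>>> mask[1, 6:] = 0                             # mask==0 marks padding
>>> best_paths = crf(logits, mask)              # batch_size * max_len decoded tags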
dlk.core.modules.distil_bert module
- class dlk.core.modules.distil_bert.DistilBertWrap(config: dlk.core.modules.distil_bert.DistilBertWrapConfig)[source]
Bases:
dlk.core.modules.Module
DistilBERT wrapper module
- forward(inputs)[source]
Run the forward pass on a mini-batch.
- Parameters
inputs – a mini-batch of inputs
- Returns
sequence_output, all_hidden_states, all_self_attentions
- init_weight(method)[source]
Initialize the model weights via 'bert.init_weight()' or load them with from_pretrain.
- Parameters
method – the init method; unused for pretrained transformers
- Returns
None
- training: bool
- class dlk.core.modules.distil_bert.DistilBertWrapConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for DistilBertWrap
- Config Example:
>>> {
>>>     "config": {
>>>         "pretrained_model_path": "*@*",
>>>         "from_pretrain": true,
>>>         "freeze": false,
>>>         "dropout": 0.0,
>>>     },
>>>     "_name": "distil_bert",
>>> }
dlk.core.modules.linear module
- class dlk.core.modules.linear.Linear(config: dlk.core.modules.linear.LinearConfig)[source]
Bases:
dlk.core.modules.Module
A wrapper for nn.Linear
- forward(input: torch.Tensor) torch.Tensor [source]
Run the forward pass on a mini-batch.
- Parameters
input – a mini-batch of inputs
- Returns
the projection result; the shape is the same as the input if no pooling is applied, otherwise it depends on the pooling method
- training: bool
- class dlk.core.modules.linear.LinearConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for Linear
- Config Example:
>>> {
>>>     "config": {
>>>         "input_size": 256,
>>>         "output_size": 2,
>>>         "dropout": 0.0, // the module output generally needs no dropout
>>>         "bias": true, // whether to use bias in the linear layer; if false, the bias is fixed to 0
>>>         "pool": null, // whether to pool the output
>>>     },
>>>     "_name": "linear",
>>> }
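A minimal sketch of the Linear wrapper with the config above; the 3-dimensional input layout is an assumption.
>>> import torch
>>> from dlk.core.modules.linear import Linear, LinearConfig
>>> config = LinearConfig({
>>>     "config": {
>>>         "input_size": 256,
>>>         "output_size": 2,
>>>         "dropout": 0.0,
>>>         "bias": True,
>>>         "pool": None,     # no pooling, so the output keeps the input's leading dims
>>>     },
>>>     "_name": "linear",
>>> })
>>> linear = Linear(config)
>>> x = torch.randn(2, 20, 256)   # assumed layout: batch * seq_len * input_size
>>> logits = linear(x)            # 2 * 20 * 2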
dlk.core.modules.logits_gather module
- class dlk.core.modules.logits_gather.LogitsGather(config: dlk.core.modules.logits_gather.LogitsGatherConfig)[source]
Bases:
dlk.core.modules.Module
Gather the output logits specified by the config
- forward(input: List[torch.Tensor]) Dict[str, torch.Tensor] [source]
Gather the required layer outputs into a dict.
- Parameters
input – a list of layer output tensors
- Returns
a dict of the gathered tensors
- training: bool
- class dlk.core.modules.logits_gather.LogitsGatherConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for LogitsGather
- Config Example:
>>> {
>>>     "config": {
>>>         "gather_layer": {
>>>             "0": {
>>>                 "map": "3", // layer 0 is not scaled; its output is named "gather_logits_3" ("gather_logits_" is the output name prefix, "3" is the map name)
>>>                 "scale": {} // don't scale
>>>             },
>>>             "1": {
>>>                 "map": "4", // layer 1 is scaled from 1024 to 200 dims and its output is named "gather_logits_4"
>>>                 "scale": {"1024": "200"},
>>>             }
>>>         },
>>>         "prefix": "gather_logits_",
>>>     },
>>>     "_name": "logits_gather",
>>> }
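A sketch of gathering two hidden-state layers with the config above; the number of layers and their hidden size (1024) are assumptions chosen to match the scale entry.
>>> import torch
>>> from dlk.core.modules.logits_gather import LogitsGather, LogitsGatherConfig
>>> config = LogitsGatherConfig({
>>>     "config": {
>>>         "gather_layer": {
>>>             "0": {"map": "3", "scale": {}},               # layer 0 -> "gather_logits_3", no scaling
>>>             "1": {"map": "4", "scale": {"1024": "200"}},  # layer 1 -> "gather_logits_4", 1024 -> 200
>>>         },
>>>         "prefix": "gather_logits_",
>>>     },
>>>     "_name": "logits_gather",
>>> })
>>> gather = LogitsGather(config)
>>> hidden_states = [torch.randn(2, 16, 1024) for _ in range(3)]  # assumed per-layer outputs
>>> outputs = gather(hidden_states)  # e.g. {"gather_logits_3": ..., "gather_logits_4": ...}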
dlk.core.modules.lstm module
- class dlk.core.modules.lstm.LSTM(config: dlk.core.modules.lstm.LSTMConfig)[source]
Bases:
dlk.core.modules.Module
A wrapper for nn.LSTM
- forward(input: torch.Tensor, mask: torch.Tensor) torch.Tensor [source]
Run the forward pass on a mini-batch.
- Parameters
input – a mini-batch of inputs
- Returns
the LSTM output; the shape is the same as the input
- training: bool
- class dlk.core.modules.lstm.LSTMConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for LSTM
- Config Example:
>>> {
>>>     "config": {
>>>         "bidirectional": true,
>>>         "output_size": 200, // the output is 2*hidden_size if bidirectional is used
>>>         "input_size": 200,
>>>         "num_layers": 1,
>>>         "dropout": 0.1, // dropout between layers
>>>         "dropout_last": true, // whether to apply dropout to the last layer output
>>>     },
>>>     "_name": "lstm",
>>> }
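A sketch of the LSTM wrapper with the config above; the mask convention (1 for real tokens, 0 for padding) is an assumption.
>>> import torch
>>> from dlk.core.modules.lstm import LSTM, LSTMConfig
>>> config = LSTMConfig({
>>>     "config": {
>>>         "bidirectional": True,
>>>         "output_size": 200,
>>>         "input_size": 200,
>>>         "num_layers": 1,
>>>         "dropout": 0.1,
>>>         "dropout_last": True,
>>>     },
>>>     "_name": "lstm",
>>> })
>>> lstm = LSTM(config)
>>> x = torch.randn(2, 20, 200)                 # batch * seq_len * input_size
>>> mask = torch.ones(2, 20, dtype=torch.long)  # assumed: 1 for real tokens, 0 for padding
>>> out = lstm(x, mask)                         # same shape as the input per the docs above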
dlk.core.modules.roberta module
- class dlk.core.modules.roberta.RobertaWrap(config: dlk.core.modules.roberta.RobertaWrapConfig)[source]
Bases:
dlk.core.modules.Module
RoBERTa wrapper module
- forward(inputs)[source]
Run the forward pass on a mini-batch.
- Parameters
inputs – a mini-batch of inputs
- Returns
sequence_output, all_hidden_states, all_self_attentions
- init_weight(method)[source]
Initialize the model weights via 'bert.init_weight()' or load them with from_pretrain.
- Parameters
method – the init method; unused for pretrained transformers
- Returns
None
- training: bool
- class dlk.core.modules.roberta.RobertaWrapConfig(config: Dict)[source]
Bases:
dlk.utils.config.BaseConfig
Config for RobertaWrap
- Config Example:
>>> {
>>>     "config": {
>>>         "pretrained_model_path": "*@*",
>>>         "from_pretrain": true,
>>>         "freeze": false,
>>>         "dropout": 0.0,
>>>     },
>>>     "_name": "roberta",
>>> }
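RobertaWrap follows the same pattern as BertWrap; a compact sketch, with the checkpoint path and input keys as assumptions.
>>> import torch
>>> from dlk.core.modules.roberta import RobertaWrap, RobertaWrapConfig
>>> config = RobertaWrapConfig({
>>>     "config": {
>>>         "pretrained_model_path": "./pretrained/roberta-base",  # placeholder path
>>>         "from_pretrain": True,
>>>         "freeze": False,
>>>         "dropout": 0.0,
>>>     },
>>>     "_name": "roberta",
>>> })
>>> roberta = RobertaWrap(config)
>>> sequence_output, all_hidden_states, all_self_attentions = roberta({
>>>     "input_ids": torch.randint(0, 50265, (2, 16)),          # assumed batch keys
>>>     "attention_mask": torch.ones(2, 16, dtype=torch.long),
>>> })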
Module contents
basic modules
- class dlk.core.modules.Module[source]
Bases:
torch.nn.modules.module.Module
This class is the DLK Module, which replaces torch.nn.Module throughout this project
- forward(inputs: Dict[str, torch.Tensor]) Dict[str, torch.Tensor] [source]
In a simple module, every step maps to this method.
- Parameters
inputs – one mini-batch of inputs
- Returns
one mini-batch of outputs
- init_weight(method)[source]
Initialize the weights of the submodules using 'method'.
- Parameters
method – the init method
- Returns
None
- predict_step(inputs: Dict[str, torch.Tensor]) Dict[str, torch.Tensor] [source]
Run prediction for one batch.
- Parameters
inputs – one mini-batch of inputs
- Returns
one mini-batch of outputs
- test_step(inputs: Dict[str, torch.Tensor]) Dict[str, torch.Tensor] [source]
Run testing for one batch.
- Parameters
inputs – one mini-batch of inputs
- Returns
one mini-batch of outputs
- training: bool
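A minimal sketch of a custom module built on this base class; the class name, the scaling logic, and the batch key are hypothetical, and only the forward signature documented above is relied on.
>>> import torch
>>> from typing import Dict
>>> from dlk.core.modules import Module
>>> class ScaleModule(Module):  # hypothetical example module
>>>     def __init__(self, scale: float = 2.0):
>>>         super().__init__()
>>>         self.scale = scale
>>>     def forward(self, inputs: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
>>>         # return a new dict rather than mutating the input batch
>>>         return {name: tensor * self.scale for name, tensor in inputs.items()}
>>> module = ScaleModule()
>>> outputs = module({"embedding": torch.randn(2, 16, 128)})  # "embedding" is an assumed key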