dlk.core.layers.encoders package

Submodules

dlk.core.layers.encoders.identity module

class dlk.core.layers.encoders.identity.IdentityEncoder(config: dlk.core.layers.encoders.identity.IdentityEncoderConfig)[source]

Bases: dlk.core.base_module.SimpleModule

Do nothing; pass the inputs through unchanged.

forward(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor][source]

Return the inputs unchanged.

Parameters

inputs – anything

Returns

inputs

training: bool
class dlk.core.layers.encoders.identity.IdentityEncoderConfig(config)[source]

Bases: dlk.core.base_module.BaseModuleConfig

Config for IdentityEncoder

Config Example:
>>> {
>>>     "config": {
>>>     },
>>>     "_name": "identity",
>>> }
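
A minimal usage sketch (assuming the config dict shown above is accepted verbatim by IdentityEncoderConfig; the tensor key "embedding" is an arbitrary example):
>>> import torch
>>> from dlk.core.layers.encoders.identity import IdentityEncoder, IdentityEncoderConfig
>>> # the empty "config" block reflects that there is nothing to configure
>>> encoder = IdentityEncoder(IdentityEncoderConfig({"config": {}, "_name": "identity"}))
>>> inputs = {"embedding": torch.randn(2, 8, 16)}
>>> outputs = encoder(inputs)  # per the docstring, the same dict comes back unchanged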

dlk.core.layers.encoders.linear module

class dlk.core.layers.encoders.linear.Linear(config: dlk.core.layers.encoders.linear.LinearConfig)[source]

Bases: dlk.core.base_module.SimpleModule

Wrapper for torch.nn.Linear.

forward(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor][source]

Every step does this.

Parameters

inputs – one mini-batch of inputs

Returns

one mini-batch of outputs

init_weight(method: Callable)[source]

Initialize the weights of the submodules with 'method'.

Parameters

method – init method

Returns

None
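
For example, assuming 'method' is applied per submodule (e.g. via torch.nn.Module.apply; the exact call convention lives in the base module), a callable like the following fits the contract:
>>> import torch.nn as nn
>>> def xavier_init(module: nn.Module) -> None:
>>>     # hypothetical initializer: only touch Linear weights, leave everything else alone
>>>     if isinstance(module, nn.Linear):
>>>         nn.init.xavier_uniform_(module.weight)
>>>         if module.bias is not None:
>>>             nn.init.zeros_(module.bias)
>>> encoder.init_weight(xavier_init)  # `encoder`: a constructed Linear encoder; returns None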

training: bool
class dlk.core.layers.encoders.linear.LinearConfig(config: Dict)[source]

Bases: dlk.core.base_module.BaseModuleConfig

Config for Linear

Config Example:
>>> {
>>>     "module": {
>>>         "_base": "linear",
>>>     },
>>>     "config": {
>>>         "input_size": "*@*",
>>>         "output_size": "*@*",
>>>         "pool": null,
>>>         "dropout": 0.0,
>>>         "output_map": {},
>>>         "input_map": {}, // required_key: provide_key
>>>     },
>>>     "_link":{
>>>         "config.input_size": ["module.config.input_size"],
>>>         "config.output_size": ["module.config.output_size"],
>>>         "config.pool": ["module.config.pool"],
>>>     },
>>>     "_name": "linear",
>>> }
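
The "*@*" values are placeholders that the config system resolves through the _link entries. Functionally, the wrapper behaves roughly like a plain torch.nn.Linear plus dropout applied to one entry of the input dict; an illustrative sketch of that equivalence with hypothetical sizes and key names (not the dlk implementation itself):
>>> import torch
>>> import torch.nn as nn
>>> linear, dropout = nn.Linear(128, 32), nn.Dropout(0.1)  # input_size=128, output_size=32
>>> inputs = {"embedding": torch.randn(4, 16, 128)}  # "embedding" is an assumed key
>>> outputs = {**inputs, "embedding": dropout(linear(inputs["embedding"]))}
>>> outputs["embedding"].shape
torch.Size([4, 16, 32])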

dlk.core.layers.encoders.lstm module

class dlk.core.layers.encoders.lstm.LSTM(config: dlk.core.layers.encoders.lstm.LSTMConfig)[source]

Bases: dlk.core.base_module.SimpleModule

Wrapper for torch.nn.LSTM.

forward(inputs: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor][source]

Every step does this.

Parameters

inputs – one mini-batch of inputs

Returns

one mini-batch of outputs

init_weight(method: Callable)[source]

Initialize the weights of the submodules with 'method'.

Parameters

method – init method

Returns

None

training: bool
class dlk.core.layers.encoders.lstm.LSTMConfig(config: Dict)[source]

Bases: dlk.core.base_module.BaseModuleConfig

Config for LSTM

Config Example:
>>> {
>>>     "module": {
>>>         "_base": "lstm",
>>>     },
>>>     "config": {
>>>         "input_map": {},
>>>         "output_map": {},
>>>         "input_size": "*@*",
>>>         "output_size": "*@*",
>>>         "num_layers": 1,
>>>         "dropout": "*@*", // dropout between layers
>>>     },
>>>     "_link": {
>>>         "config.input_size": ["module.config.input_size"],
>>>         "config.output_size": ["module.config.output_size"],
>>>         "config.dropout": ["module.config.dropout"],
>>>     },
>>>     "_name": "lstm",
>>> }
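
As with Linear, the "*@*" placeholders are filled in through _link. A rough plain-PyTorch sketch of the wrapped computation (sizes, key names, and the batch_first layout are assumptions):
>>> import torch
>>> import torch.nn as nn
>>> lstm = nn.LSTM(input_size=128, hidden_size=256, num_layers=1, batch_first=True)
>>> inputs = {"embedding": torch.randn(4, 16, 128)}
>>> output, (h_n, c_n) = lstm(inputs["embedding"])  # dropout would only apply between stacked layers
>>> outputs = {**inputs, "embedding": output}
>>> outputs["embedding"].shape
torch.Size([4, 16, 256])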

Module contents

encoders

dlk.core.layers.encoders.import_encoders(encoders_dir, namespace)[source]
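
No docstring is published for this helper; judging by its name and signature, it presumably imports every encoder module found under encoders_dir into namespace so each encoder can register itself. A hedged sketch of that common pattern (not dlk's actual code):
>>> import importlib
>>> import os
>>> def import_encoders_sketch(encoders_dir: str, namespace: str) -> None:
>>>     # import every non-private .py file as `namespace.<module>`
>>>     for filename in os.listdir(encoders_dir):
>>>         if filename.endswith(".py") and not filename.startswith("_"):
>>>             importlib.import_module(f"{namespace}.{filename[:-3]}")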