dlk.core.optimizers package

Submodules

dlk.core.optimizers.adamw module

class dlk.core.optimizers.adamw.AdamWOptimizer(model: torch.nn.modules.module.Module, config: dlk.core.optimizers.adamw.AdamWOptimizerConfig)[source]

Bases: dlk.core.optimizers.BaseOptimizer

Wrapper for optim.AdamW

get_optimizer() torch.optim.adamw.AdamW[source]

return the initialized AdamW optimizer

Returns

AdamW Optimizer
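
Minimal usage sketch (assumes AdamWOptimizerConfig accepts a dict shaped like the Config Example below; the model and values here are placeholders, not part of the library):
>>> import torch.nn as nn
>>> from dlk.core.optimizers.adamw import AdamWOptimizer, AdamWOptimizerConfig
>>>
>>> model = nn.Linear(16, 4)  # placeholder model
>>> config = AdamWOptimizerConfig({
>>>     "config": {
>>>         "lr": 5e-5,
>>>         "betas": [0.9, 0.999],
>>>         "eps": 1e-6,
>>>         "weight_decay": 1e-2,
>>>         "optimizer_special_groups": {},
>>>         "name": "default",
>>>     },
>>>     "_name": "adamw",
>>> })
>>> optimizer = AdamWOptimizer(model, config).get_optimizer()  # torch.optim.AdamW instance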

class dlk.core.optimizers.adamw.AdamWOptimizerConfig(config: Dict)[source]

Bases: dlk.utils.config.BaseConfig

Config for AdamWOptimizer

Config Example:
>>> {
>>>     "config": {
>>>         "lr": 5e-5,
>>>         "betas": [0.9, 0.999],
>>>         "eps": 1e-6,
>>>         "weight_decay": 1e-2,
>>>         "optimizer_special_groups": {
>>>             "order": ['decoder', 'bias'], // matching order: if a parameter matches both 'decoder' and 'bias', it is assigned to 'decoder'; each name in 'order' must be a group defined below
>>>             "bias": {
>>>                 "config": {
>>>                     "weight_decay": 0
>>>                 },
>>>                 "pattern": ["bias",  "LayerNorm.bias", "LayerNorm.weight"]
>>>             },
>>>             "decoder": {
>>>                 "config": {
>>>                     "lr": 1e-3
>>>                 },
>>>                 "pattern": ["decoder"]
>>>             },
>>>         },
>>>         "name": "default" // default group name
>>>     },
>>>     "_name": "adamw",
>>> }
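
For reference, the "optimizer_special_groups" above corresponds roughly to building torch.optim.AdamW parameter groups by hand. A sketch of that equivalence (not the dlk implementation; the Toy model is a placeholder):
>>> import torch
>>> import torch.nn as nn
>>>
>>> class Toy(nn.Module):
>>>     def __init__(self):
>>>         super().__init__()
>>>         self.encoder = nn.Linear(8, 8)
>>>         self.decoder = nn.Linear(8, 2)
>>>
>>> model = Toy()
>>> decoder_params, bias_params, default_params = [], [], []
>>> for name, param in model.named_parameters():
>>>     if "decoder" in name:  # checked first, per "order"
>>>         decoder_params.append(param)
>>>     elif any(p in name for p in ["bias", "LayerNorm.bias", "LayerNorm.weight"]):
>>>         bias_params.append(param)
>>>     else:
>>>         default_params.append(param)
>>>
>>> optimizer = torch.optim.AdamW(
>>>     [
>>>         {"params": decoder_params, "lr": 1e-3},      # "decoder" group overrides lr
>>>         {"params": bias_params, "weight_decay": 0},  # "bias" group disables weight decay
>>>         {"params": default_params},                  # "default" group uses the top-level settings
>>>     ],
>>>     lr=5e-5, betas=(0.9, 0.999), eps=1e-6, weight_decay=1e-2,
>>> )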

dlk.core.optimizers.sgd module

class dlk.core.optimizers.sgd.SGDOptimizer(model: torch.nn.modules.module.Module, config: dlk.core.optimizers.sgd.SGDOptimizerConfig)[source]

Bases: dlk.core.optimizers.BaseOptimizer

Wrapper for optim.SGD

get_optimizer() torch.optim.sgd.SGD[source]

return the initialized SGD optimizer

Returns

SGD Optimizer

class dlk.core.optimizers.sgd.SGDOptimizerConfig(config: Dict)[source]

Bases: dlk.utils.config.BaseConfig

Config for SGDOptimizer

Config Example:
>>> {
>>>     "config": {
>>>         "lr": 1e-3,
>>>         "momentum": 0.9,
>>>         "dampening": 0,
>>>         "weight_decay": 0,
>>>         "nesterov": false,
>>>         "optimizer_special_groups": {
>>>             // "order": ['decoder', 'bias'], // matching order: if a parameter matches both 'decoder' and 'bias', it is assigned to 'decoder'; each name in 'order' must be a group defined below
>>>             // "bias": {
>>>             //     "config": {
>>>             //         "weight_decay": 0
>>>             //     },
>>>             //     "pattern": ["bias",  "LayerNorm.bias", "LayerNorm.weight"]
>>>             // },
>>>             // "decoder": {
>>>             //     "config": {
>>>             //         "lr": 1e-3
>>>             //     },
>>>             //     "pattern": ["decoder"]
>>>             // },
>>>         },
>>>         "name": "default" // default group name
>>>     },
>>>     "_name": "sgd",
>>> }
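
With "optimizer_special_groups" left empty (as in the commented-out block above), every parameter falls into the single "default" group; the result is roughly equivalent to this sketch (the model is a placeholder):
>>> import torch
>>> import torch.nn as nn
>>>
>>> model = nn.Linear(8, 2)  # placeholder model
>>> optimizer = torch.optim.SGD(
>>>     model.parameters(),
>>>     lr=1e-3, momentum=0.9, dampening=0, weight_decay=0, nesterov=False,
>>> )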

Module contents

optimizers

class dlk.core.optimizers.BaseOptimizer[source]

Bases: object

get_optimizer() torch.optim.optimizer.Optimizer[source]

return the initialized optimizer

Returns

Optimizer

init_optimizer(optimizer: torch.optim.optimizer.Optimizer, model: torch.nn.modules.module.Module, config: Dict)[source]

initialize the optimizer for the parameters in model; the parameter groups are decided by config

Parameters
  • optimizer – the optimizer to initialize (AdamW, SGD, etc.)

  • model – the PyTorch model whose parameters will be optimized

  • config – decides the parameter groups, learning rate, etc.

Returns

the initialized optimizer
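
A hedged sketch of how a new wrapper could build on BaseOptimizer. The AdadeltaOptimizer below is hypothetical and for illustration only; it assumes the wrapper holds the plain "config" dict and delegates parameter grouping to init_optimizer as documented:
>>> from typing import Dict
>>> import torch
>>> from dlk.core.optimizers import BaseOptimizer
>>>
>>> class AdadeltaOptimizer(BaseOptimizer):  # hypothetical subclass, not part of dlk
>>>     def __init__(self, model: torch.nn.Module, config: Dict):
>>>         self.model = model
>>>         self.config = config  # assumed to be the plain "config" dict
>>>
>>>     def get_optimizer(self) -> torch.optim.Adadelta:
>>>         # delegate parameter grouping and instantiation to the shared helper
>>>         return self.init_optimizer(torch.optim.Adadelta, self.model, self.config)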

dlk.core.optimizers.import_optimizers(optimizers_dir, namespace)[source]
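
No docstring is shown for import_optimizers; helpers with this shape usually walk a directory and import each optimizer module so it can register itself. A sketch under that assumption (not the actual dlk code):
>>> import importlib
>>> import os
>>>
>>> def import_optimizers(optimizers_dir, namespace):
>>>     # assumption: import every non-underscore module/package under optimizers_dir
>>>     for filename in os.listdir(optimizers_dir):
>>>         path = os.path.join(optimizers_dir, filename)
>>>         if filename.startswith("_"):
>>>             continue
>>>         if filename.endswith(".py"):
>>>             importlib.import_module(namespace + "." + filename[:-3])
>>>         elif os.path.isdir(path):
>>>             importlib.import_module(namespace + "." + filename)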