dlk.core.imodels package

Submodules

dlk.core.imodels.basic module

class dlk.core.imodels.basic.BasicIModel(config: dlk.core.imodels.basic.BasicIModelConfig, checkpoint=False)[source]

Bases: pytorch_lightning.core.lightning.LightningModule, dlk.core.imodels.GatherOutputMixin

configure_optimizers()[source]

Configure the optimizer and scheduler
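The source does not show what this override returns, but pytorch_lightning's convention for this hook is a list of optimizers plus a list of scheduler config dicts. A minimal sketch of that return shape (the `"interval": "step"` value is an illustrative assumption, not necessarily what BasicIModel uses):

```python
def configure_optimizers_sketch(optimizer, scheduler):
    # pytorch_lightning convention: return a list of optimizers and a
    # list of scheduler config dicts; "interval" controls whether the
    # scheduler ticks per optimizer step or per epoch.
    return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
```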

property epoch_training_steps: int

Number of training steps per epoch, inferred from the datamodule and devices.

forward(inputs: Dict[str, torch.Tensor]) Dict[str, torch.Tensor][source]

Run a forward pass on a mini-batch.

Parameters

inputs – a mini-batch of inputs

Returns

the outputs

get_progress_bar_dict()[source]

Override the progress bar dict to remove the 'v_num' entry, which we don't need.

Returns

progress_bar dict
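The 'v_num' removal described above can be sketched on a plain dict, with no Lightning dependency (the function name here is illustrative; the real method calls `super().get_progress_bar_dict()` and filters its result):

```python
def strip_v_num(progress_bar_dict):
    # Drop the 'v_num' (logger version number) entry that
    # pytorch_lightning adds to the progress bar by default.
    items = dict(progress_bar_dict)  # copy so the caller's dict is untouched
    items.pop("v_num", None)
    return items
```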

property num_training_epochs: int

Total training epochs inferred from datamodule and devices.

property num_training_steps: int

Total training steps inferred from datamodule and devices.
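The inference of total steps from the datamodule and devices presumably follows the usual arithmetic; a hedged sketch (all parameter names here are assumptions, not the actual attributes BasicIModel reads):

```python
import math

def total_training_steps(num_samples, batch_size, num_devices,
                         accumulate_grad_batches, max_epochs):
    # One optimizer step consumes batch_size * num_devices *
    # accumulate_grad_batches samples across the whole job.
    effective_batch = batch_size * num_devices * accumulate_grad_batches
    steps_per_epoch = math.ceil(num_samples / effective_batch)
    return steps_per_epoch * max_epochs
```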

predict_step(batch: Dict, batch_idx: int) Dict[source]

Run prediction on a mini-batch.

Parameters
  • batch – a mini-batch of inputs

  • batch_idx – the index of the mini-batch (in the dataloader)

Returns

the outputs

test_epoch_end(outputs: List[Dict]) List[Dict][source]

Gather the outputs from all nodes and run postprocessing on them.

Parameters

outputs – the output list returned by the current node

Returns

all node outputs

test_step(batch: Dict[str, torch.Tensor], batch_idx: int) Dict[source]

Run testing on a mini-batch.

The outputs only gather the keys in self.gather_data.keys for postprocessing.

Parameters
  • batch – a mini-batch of inputs

  • batch_idx – the index of the mini-batch (in the dataloader)

Returns

the outputs

training: bool

training_step(batch: Dict[str, torch.Tensor], batch_idx: int)[source]

Run a training step on a mini-batch.

Parameters
  • batch – a mini-batch of inputs

  • batch_idx – the index of the mini-batch (in the dataloader)

Returns

the outputs

validation_epoch_end(outputs: List[Dict]) List[Dict][source]

Gather the outputs from all nodes and run postprocessing on them.

The outputs only gather the keys in self.gather_data.keys for postprocessing.

Parameters

outputs – the output list returned by the current node

Returns

all node outputs

validation_step(batch: Dict[str, torch.Tensor], batch_idx: int) Dict[str, torch.Tensor][source]

Run validation on a mini-batch.

The outputs only gather the keys in self.gather_data.keys for postprocessing.

Parameters
  • batch – a mini-batch of inputs

  • batch_idx – the index of the mini-batch (in the dataloader)

Returns

the outputs
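The key filtering that validation_step (and test_step) applies can be sketched as a plain dict comprehension; `gather_keys` below stands in for `self.gather_data.keys`, and the function name is illustrative:

```python
def select_gather_keys(step_output, gather_keys):
    # Keep only the entries named in gather_keys, so the epoch-end
    # gather and postprocessing see a small dict per step.
    return {k: v for k, v in step_output.items() if k in gather_keys}
```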

class dlk.core.imodels.basic.BasicIModelConfig(config: Dict)[source]

Bases: dlk.utils.config.BaseConfig

The basic imodel config provides all the config for the model/optimizer/loss/scheduler/postprocess.
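A hypothetical layout only: the key names below merely mirror the get_* helpers documented next; check the real dlk config schema for the actual key names and contents.

```python
# Each sub-dict feeds one get_* helper of BasicIModelConfig.
# Key names are assumptions mirroring those helpers, not a verified schema.
imodel_config = {
    "model": {},        # consumed by get_model
    "optimizer": {},    # consumed by get_optimizer
    "loss": {},         # consumed by get_loss
    "scheduler": {},    # consumed by get_scheduler
    "postprocess": {},  # consumed by get_postprocessor
}
```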

get_loss(config: Dict)[source]

Use config to init the loss

Parameters

config – loss config

Returns

Loss, LossConfig

get_model(config: Dict)[source]

Use config to init the model

Parameters

config – model config

Returns

Model, ModelConfig

get_optimizer(config: Dict)[source]

Use config to init the optimizer

Parameters

config – optimizer config

Returns

Optimizer, OptimizerConfig

get_postprocessor(config: Dict)[source]

Use config to init the postprocessor

Parameters

config – postprocess config

Returns

PostProcess, PostProcessConfig

get_scheduler(config: Dict)[source]

Use config to init the scheduler

Parameters

config – scheduler config

Returns

Scheduler, SchedulerConfig

dlk.core.imodels.distill module

Module contents

imodels

class dlk.core.imodels.GatherOutputMixin[source]

Bases: object

Gather the outputs of all the small batches into one big batch.

concat_list_of_dict_outputs(outputs: List[Dict]) Dict[source]

Only supports outputs that all have the same dimensions; now deprecated.

Parameters

outputs – outputs returned by multiple nodes (a list of dicts)

Returns

all lists concatenated by name

gather_outputs(outputs: List[Dict])[source]

Gather the distributed outputs.

Parameters

outputs – the outputs of one node

Returns

all outputs

static proc_dist_outputs(dist_outputs: List[Dict]) List[Dict][source]

Gather all distributed outputs into outputs as if produced by a single worker.

Parameters

dist_outputs – the inputs to pytorch_lightning's train/test/.._epoch_end when using DDP

Returns

the inputs to pytorch_lightning's train/test/.._epoch_end as if run on a single worker.
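The reshaping this describes can be sketched without torch. Assume each gathered dict holds, per key, a list of per-worker values (roughly what an all_gather yields); splitting them back out gives one dict per worker, the single-worker shape. Names and the list-per-key assumption are illustrative, not the actual implementation:

```python
def proc_dist_outputs_sketch(dist_outputs):
    # dist_outputs: one dict per batch; each value is a list of
    # per-worker values. Emit one dict per (batch, worker) pair so
    # downstream code sees the same shape as a single-worker run.
    outputs = []
    for gathered in dist_outputs:
        world_size = len(next(iter(gathered.values())))
        for rank in range(world_size):
            outputs.append({k: v[rank] for k, v in gathered.items()})
    return outputs
```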

dlk.core.imodels.import_imodels(imodels_dir, namespace)[source]