dlk.core.callbacks package
Submodules
dlk.core.callbacks.checkpoint module
- class dlk.core.callbacks.checkpoint.CheckpointCallback(config: dlk.core.callbacks.checkpoint.CheckpointCallbackConfig)[source]
Bases: object
Save checkpoints as specified by the config
- class dlk.core.callbacks.checkpoint.CheckpointCallbackConfig(config: Dict)[source]
Bases: object
Config for CheckpointCallback
- Config Example:
>>> {
>>>     // default checkpoint configuration
>>>     "_name": "checkpoint",
>>>     "config": {
>>>         "monitor": "*@*", // the metric or logged value to monitor
>>>         "save_top_k": 3, // save the top k checkpoints
>>>         "mode": "*@*", // "max" or "min": select the top-k max or min checkpoints; use "min" for loss, "max" for acc
>>>         "save_last": true, // always save the last checkpoint
>>>         "auto_insert_metric_name": true, // whether to include the metric name in the saved file name
>>>         "every_n_train_steps": null, // number of training steps between checkpoints
>>>         "every_n_epochs": 1, // number of epochs between checkpoints
>>>         "save_on_train_epoch_end": false, // whether to run checkpointing at the end of the training epoch; if false, the check runs at the end of validation
>>>         "save_weights_only": false, // whether to save only the weights, omitting other state such as the optimizer
>>>     }
>>> }
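For reference, a minimal sketch of wiring this config into the callback, assuming dlk is installed and using the constructor signatures shown above; the concrete "val_loss"/"min" values stand in for the "*@*" placeholders and are illustrative, not defaults.

from dlk.core.callbacks.checkpoint import (
    CheckpointCallback,
    CheckpointCallbackConfig,
)

# Fill the "*@*" placeholders with concrete values; "val_loss"/"min"
# here are illustrative choices for a loss-style metric.
checkpoint_config = {
    "_name": "checkpoint",
    "config": {
        "monitor": "val_loss",         # track validation loss
        "save_top_k": 3,               # keep the 3 best checkpoints
        "mode": "min",                 # lower val_loss is better
        "save_last": True,
        "auto_insert_metric_name": True,
        "every_n_train_steps": None,
        "every_n_epochs": 1,
        "save_on_train_epoch_end": False,
        "save_weights_only": False,
    },
}

callback = CheckpointCallback(CheckpointCallbackConfig(checkpoint_config))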
dlk.core.callbacks.early_stop module
- class dlk.core.callbacks.early_stop.EarlyStoppingCallback(config: dlk.core.callbacks.early_stop.EarlyStoppingCallbackConfig)[source]
Bases: object
Early stopping controlled by the config
- class dlk.core.callbacks.early_stop.EarlyStoppingCallbackConfig(config: Dict)[source]
Bases: object
Config for EarlyStoppingCallback
- Config Example:
>>> {
>>>     "_name": "early_stop",
>>>     "config": {
>>>         "monitor": "val_loss",
>>>         "mode": "*@*", // "min" or "max": "min" when the monitored value is a loss, "max" when it is acc, f1, etc.
>>>         "patience": 3,
>>>         "min_delta": 0.0,
>>>         "check_on_train_epoch_end": null,
>>>         "strict": true, // raise an error if the monitored value is not found
>>>         "stopping_threshold": null, // float; stop once the value is good enough
>>>         "divergence_threshold": null, // float; stop once the value is bad enough
>>>         "verbose": true, // verbose mode prints more info
>>>     }
>>> }
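A minimal sketch of the same pattern for early stopping, assuming dlk is installed; here the "*@*" mode placeholder is filled with "max" because the (illustrative) monitored metric "val_f1" should increase.

from dlk.core.callbacks.early_stop import (
    EarlyStoppingCallback,
    EarlyStoppingCallbackConfig,
)

# Monitoring an f1-style metric, so "mode" is "max"; the metric
# name is illustrative.
early_stop_config = {
    "_name": "early_stop",
    "config": {
        "monitor": "val_f1",
        "mode": "max",                # higher f1 is better
        "patience": 3,                # stop after 3 checks without improvement
        "min_delta": 0.0,
        "check_on_train_epoch_end": None,
        "strict": True,
        "stopping_threshold": None,
        "divergence_threshold": None,
        "verbose": True,
    },
}

callback = EarlyStoppingCallback(EarlyStoppingCallbackConfig(early_stop_config))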
dlk.core.callbacks.lr_monitor module
- class dlk.core.callbacks.lr_monitor.LearningRateMonitorCallback(config: dlk.core.callbacks.lr_monitor.LearningRateMonitorCallbackConfig)[source]
Bases: object
Monitor the learning rate
- class dlk.core.callbacks.lr_monitor.LearningRateMonitorCallbackConfig(config: Dict)[source]
Bases: object
Config for LearningRateMonitorCallback
- Config Example:
>>> {
>>>     "_name": "lr_monitor",
>>>     "config": {
>>>         "logging_interval": null, // null logs at the individual interval set by each scheduler's interval key; other values: "step", "epoch"
>>>         "log_momentum": true, // whether to log momentum
>>>     }
>>> }
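A short sketch, assuming dlk is installed, that overrides the null default to log the learning rate once per step rather than at each scheduler's own interval.

from dlk.core.callbacks.lr_monitor import (
    LearningRateMonitorCallback,
    LearningRateMonitorCallbackConfig,
)

lr_monitor_config = {
    "_name": "lr_monitor",
    "config": {
        "logging_interval": "step",   # or "epoch", or None for per-scheduler
        "log_momentum": True,
    },
}

callback = LearningRateMonitorCallback(
    LearningRateMonitorCallbackConfig(lr_monitor_config)
)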
dlk.core.callbacks.weight_average module
- class dlk.core.callbacks.weight_average.StochasticWeightAveragingCallback(config: dlk.core.callbacks.weight_average.StochasticWeightAveragingCallbackConfig)[source]
Bases: object
Average model weights as specified by the config
- class dlk.core.callbacks.weight_average.StochasticWeightAveragingCallbackConfig(config)[source]
Bases: object
Config for StochasticWeightAveragingCallback
- Config Example:
>>> { // weight_average default
>>>     "_name": "weight_average",
>>>     "config": {
>>>         "swa_epoch_start": 0.8, // the epoch at which SWA starts
>>>         "swa_lrs": null,
>>>         // null: use the optimizer's current learning rate when the SWA procedure starts
>>>         // float: use this value for all parameter groups of the optimizer
>>>         // List[float]: a list of values, one per parameter group of the optimizer
>>>         "annealing_epochs": 10,
>>>         "annealing_strategy": "cos",
>>>         "device": null, // device on which to keep the averaged weights; null auto-detects; if the GPU runs out of memory, change this to "cpu"
>>>     }
>>> }
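A minimal sketch of the SWA callback, assuming dlk is installed; it picks the float form of swa_lrs (one rate for all parameter groups, the 0.05 value being illustrative) and keeps the averaged weights on CPU to avoid GPU out-of-memory errors.

from dlk.core.callbacks.weight_average import (
    StochasticWeightAveragingCallback,
    StochasticWeightAveragingCallbackConfig,
)

swa_config = {
    "_name": "weight_average",
    "config": {
        "swa_epoch_start": 0.8,       # start SWA at 80% of training
        "swa_lrs": 0.05,              # float: one lr for all parameter groups
        "annealing_epochs": 10,
        "annealing_strategy": "cos",
        "device": "cpu",              # keep averaged weights off the GPU
    },
}

callback = StochasticWeightAveragingCallback(
    StochasticWeightAveragingCallbackConfig(swa_config)
)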
Module contents
callbacks