dlk.core.losses package
Submodules
dlk.core.losses.bce module
- class dlk.core.losses.bce.BCEWithLogitsLoss(config: dlk.core.losses.bce.BCEWithLogitsLossConfig)[source]
Bases: object
binary cross-entropy loss for binary classification
- calc(result, inputs, rt_config)[source]
Calculate the loss; the prediction is taken from result, the ground truth from inputs.
- Parameters
result – the model prediction dict
inputs – all the inputs for the model
rt_config – provides the current training status:
>>> {
>>>     "current_step": self.global_step,
>>>     "current_epoch": self.current_epoch,
>>>     "total_steps": self.num_training_steps,
>>>     "total_epochs": self.num_training_epochs
>>> }
- Returns
loss
- update_config(rt_config: Dict)[source]
Callback for the imodel to update the total steps and epochs.
When the loss module is initialized, the total steps and epochs are not yet known; once all the data is ready, the imodel updates these values for the loss module.
- Parameters
rt_config – {"total_steps": self.num_training_steps, "total_epochs": self.num_training_epochs}
- Returns
None
- class dlk.core.losses.bce.BCEWithLogitsLossConfig(config: Dict)[source]
Bases: dlk.core.base_module.BaseModuleConfig
Config for BCEWithLogitsLoss
- Config Example:
>>> {
>>>     "config": {
>>>         "pred_truth_pair": [],  # len(.) == 2, the 1st is the pred_name, the 2nd is the truth_name in __call__ inputs
>>>         "schedule": [1],
>>>         "masked_select": null,  // if provided, only select the masked (=1) data
>>>         "scale": [1],  # scale the loss for every schedule stage
>>>         // "schedule": [0.3, 1.0],  # can be a list or str
>>>         // "scale": "[0.5, 1]",
>>>     },
>>>     "_name": "bce",
>>> }
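A minimal usage sketch of the workflow above. The pred/truth names "logits" and "label", the tensor shapes, and the assumption that the config accepts exactly these keys are illustrative only, not guaranteed by the API:
>>> import torch
>>> from dlk.core.losses.bce import BCEWithLogitsLoss, BCEWithLogitsLossConfig
>>>
>>> # hypothetical pred/truth names: the loss reads the prediction from
>>> # result["logits"] and the ground truth from inputs["label"]
>>> config = BCEWithLogitsLossConfig({
>>>     "config": {
>>>         "pred_truth_pair": ["logits", "label"],
>>>         "schedule": [1],
>>>         "masked_select": None,
>>>         "scale": [1],
>>>     },
>>>     "_name": "bce",
>>> })
>>> loss_module = BCEWithLogitsLoss(config)
>>>
>>> # the imodel fills in the totals once the dataloader is ready ...
>>> loss_module.update_config({"total_steps": 1000, "total_epochs": 10})
>>>
>>> # ... and passes the running status on every call to calc
>>> result = {"logits": torch.randn(4, 1)}                   # model prediction dict (assumed key)
>>> inputs = {"label": torch.randint(0, 2, (4, 1)).float()}  # ground truth (assumed key)
>>> loss = loss_module.calc(result, inputs, rt_config={
>>>     "current_step": 100, "current_epoch": 1,
>>>     "total_steps": 1000, "total_epochs": 10,
>>> })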
dlk.core.losses.cross_entropy module
- class dlk.core.losses.cross_entropy.CrossEntropyLoss(config: dlk.core.losses.cross_entropy.CrossEntropyLossConfig)[source]
Bases: object
cross-entropy loss for multi-class classification
- calc(result, inputs, rt_config)[source]
Calculate the loss; the prediction is taken from result, the ground truth from inputs.
- Parameters
result – the model prediction dict
inputs – all the inputs for the model
rt_config – provides the current training status:
>>> {
>>>     "current_step": self.global_step,
>>>     "current_epoch": self.current_epoch,
>>>     "total_steps": self.num_training_steps,
>>>     "total_epochs": self.num_training_epochs
>>> }
- Returns
loss
- update_config(rt_config)[source]
Callback for the imodel to update the total steps and epochs.
When the loss module is initialized, the total steps and epochs are not yet known; once all the data is ready, the imodel updates these values for the loss module.
- Parameters
rt_config – {"total_steps": self.num_training_steps, "total_epochs": self.num_training_epochs}
- Returns
None
- class dlk.core.losses.cross_entropy.CrossEntropyLossConfig(config: Dict)[source]
Bases: dlk.core.base_module.BaseModuleConfig
Config for CrossEntropyLoss
- Config Example:
>>> {
>>>     "config": {
>>>         "ignore_index": -1,
>>>         "weight": null,  # or a list of values, one for every class
>>>         "label_smoothing": 0.0,  # torch>=1.10
>>>         "pred_truth_pair": [],  # len(.) == 2, the 1st is the pred_name, the 2nd is the truth_name in __call__ inputs
>>>         "schedule": [1],
>>>         "scale": [1],  # scale the loss for every schedule stage
>>>         // "schedule": [0.3, 1.0],  # can be a list or str
>>>         // "scale": "[0.5, 1]",
>>>     },
>>>     "_name": "cross_entropy",
>>> }
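The ignore_index, weight, and label_smoothing keys presumably map to the like-named arguments of torch.nn.CrossEntropyLoss. The stand-alone sketch below (with made-up shapes and values) shows the underlying computation these keys configure:
>>> import torch
>>>
>>> # 3-class example; label entries equal to -1 are ignored
>>> logits = torch.randn(4, 3)                    # (batch, num_classes)
>>> labels = torch.tensor([0, 2, 1, -1])          # (batch,)
>>> loss_fn = torch.nn.CrossEntropyLoss(
>>>     ignore_index=-1,                          # "ignore_index": -1
>>>     weight=torch.tensor([1.0, 2.0, 1.0]),     # "weight": one value per class
>>>     label_smoothing=0.1,                      # "label_smoothing", requires torch>=1.10
>>> )
>>> loss = loss_fn(logits, labels)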
dlk.core.losses.identity module
- class dlk.core.losses.identity.IdentityLoss(config: dlk.core.losses.identity.IdentityLossConfig)[source]
Bases: object
gathers and returns a loss that was already calculated by a previous module, such as a CRF
- calc(result, inputs, rt_config)[source]
Calculate the loss; the prediction is taken from result, the ground truth from inputs.
- Parameters
result – the model prediction dict
inputs – all the inputs for the model
rt_config – provides the current training status:
>>> {
>>>     "current_step": self.global_step,
>>>     "current_epoch": self.current_epoch,
>>>     "total_steps": self.num_training_steps,
>>>     "total_epochs": self.num_training_epochs
>>> }
- Returns
loss
- update_config(rt_config)[source]
Callback for the imodel to update the total steps and epochs.
When the loss module is initialized, the total steps and epochs are not yet known; once all the data is ready, the imodel updates these values for the loss module.
- Parameters
rt_config – {"total_steps": self.num_training_steps, "total_epochs": self.num_training_epochs}
- Returns
None
- class dlk.core.losses.identity.IdentityLossConfig(config: Dict)[source]
Bases: dlk.core.base_module.BaseModuleConfig
Config for IdentityLoss
- Config Example:
>>> {
>>>     config: {
>>>         "schedule": [1],
>>>         "scale": [1],  # scale the loss for every schedule
>>>         // "schedule": [0.3, 1.0],  # can be a list or str
>>>         // "scale": "[0.5, 1]",
>>>         "loss": "loss",  // the real loss from result['loss']
>>>     },
>>>     _name: "identity",
>>> }
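As a sketch of what "identity" means here: an upstream module (a CRF decoder, for example) has already computed its own loss and stored it in the model prediction dict, and this module simply looks it up under the configured key. The values and keys below are illustrative assumptions, not the module's actual code:
>>> import torch
>>>
>>> # the upstream module already placed its loss in the prediction dict
>>> result = {"loss": torch.tensor(0.73), "logits": torch.randn(4, 5)}
>>>
>>> loss_key = "loss"        # the "loss" entry from the config above
>>> loss = result[loss_key]  # IdentityLoss just gathers and returns it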
dlk.core.losses.mse module
- class dlk.core.losses.mse.MSELoss(config: dlk.core.losses.mse.MSELossConfig)[source]
Bases: object
MSE loss for regression, distillation, etc.
- calc(result, inputs, rt_config)[source]
Calculate the loss; the prediction is taken from result, the ground truth from inputs.
- Parameters
result – the model prediction dict
inputs – all the inputs for the model
rt_config – provides the current training status:
>>> {
>>>     "current_step": self.global_step,
>>>     "current_epoch": self.current_epoch,
>>>     "total_steps": self.num_training_steps,
>>>     "total_epochs": self.num_training_epochs
>>> }
- Returns
loss
- update_config(rt_config)[source]
Callback for the imodel to update the total steps and epochs.
When the loss module is initialized, the total steps and epochs are not yet known; once all the data is ready, the imodel updates these values for the loss module.
- Parameters
rt_config – {"total_steps": self.num_training_steps, "total_epochs": self.num_training_epochs}
- Returns
None
- class dlk.core.losses.mse.MSELossConfig(config: Dict)[source]
Bases: dlk.core.base_module.BaseModuleConfig
Config for MSELoss
- Config Example:
>>> {
>>>     "config": {
>>>         "pred_truth_pair": [],  # len(.) == 2, the 1st is the pred_name, the 2nd is the truth_name in __call__ inputs
>>>         "schedule": [1],
>>>         "masked_select": null,  // if provided, only select the masked (=1) data
>>>         "scale": [1],  # scale the loss for every schedule stage
>>>         // "schedule": [0.3, 1.0],  # can be a list or str
>>>         // "scale": "[0.5, 1]",
>>>     },
>>>     "_name": "mse",
>>> }
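A stand-alone sketch of the masked_select idea (an assumption about the behaviour, not the module's actual code): with a 0/1 mask, only positions where the mask equals 1 contribute to the MSE:
>>> import torch
>>>
>>> pred  = torch.tensor([0.2, 0.8, 0.5, 0.1])
>>> truth = torch.tensor([0.0, 1.0, 1.0, 0.0])
>>> mask  = torch.tensor([1, 1, 0, 1]).bool()  # 1 = keep, 0 = drop
>>>
>>> # select only the masked (=1) positions before computing the loss
>>> loss = torch.nn.functional.mse_loss(pred[mask], truth[mask])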
dlk.core.losses.multi_loss module
- class dlk.core.losses.multi_loss.MultiLoss(config: dlk.core.losses.multi_loss.MultiLossConfig)[source]
Bases: object
This module is not implemented yet; do not use it.
- calc(result, inputs, rt_config)[source]
Calculate the loss; the prediction is taken from result, the ground truth from inputs.
- Parameters
result – the model prediction dict
inputs – all the inputs for the model
rt_config – provides the current training status:
>>> {
>>>     "current_step": self.global_step,
>>>     "current_epoch": self.current_epoch,
>>>     "total_steps": self.num_training_steps,
>>>     "total_epochs": self.num_training_epochs
>>> }
- Returns
loss
- class dlk.core.losses.multi_loss.MultiLossConfig(config: Dict)[source]
Bases: object
Config for MultiLoss
- Config Example:
>>> {
>>>     "loss@the_first": {
>>>         config: {
>>>             "ignore_index": -1,
>>>             "weight": null,  # or a list of values, one for every class
>>>             "label_smoothing": 0.0,  # torch>=1.10
>>>             "pred_truth_pair": ["logits1", "label1"],  # len(.) == 2, the 1st is the pred_name, the 2nd is the truth_name in __call__ inputs
>>>             "schedule": [0.3, 0.6, 1],
>>>             "scale": [1, 0, 0.5],  # scale the loss for every schedule
>>>             // "schedule": [0.3, 1.0],
>>>             // "scale": [0, 1, 0.5],  # scale the loss
>>>         },
>>>         _name: "cross_entropy",
>>>     },
>>>     "loss@the_second": {
>>>         config: {
>>>             "pred_truth_pair": ["logits2", "label2"],  # len(.) == 2, the 1st is the pred_name, the 2nd is the truth_name in __call__ inputs
>>>             "schedule": [0.3, 0.6, 1],
>>>             "scale": [0, 1, 0.5],  # scale the loss for every schedule
>>>             // "schedule": [0.3, 1.0],
>>>             // "scale": [0, 1, 0.5],  # scale the loss
>>>         },
>>>         _base: "cross_entropy",  // _name or _base are both ok
>>>     },
>>>     config: {
>>>         "loss_list": ['the_first', 'the_second'],
>>>     },
>>>     _name: "cross_entropy",
>>> }
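Every config above carries a schedule/scale pair. MultiLoss itself is not implemented, but the sketch below illustrates one plausible reading of that pair (an assumption about the semantics, not the module's code): schedule marks stage boundaries as fractions of total_steps, and scale gives the multiplier applied to the loss within each stage.
>>> def current_scale(current_step, total_steps, schedule, scale):
>>>     # return the scale of the stage that current_step falls into (assumed semantics)
>>>     progress = current_step / total_steps
>>>     for boundary, stage_scale in zip(schedule, scale):
>>>         if progress <= boundary:
>>>             return stage_scale
>>>     return scale[-1]
>>>
>>> # with "schedule": [0.3, 0.6, 1] and "scale": [1, 0, 0.5] from "loss@the_first":
>>> current_scale(100, 1000, [0.3, 0.6, 1], [1, 0, 0.5])  # -> 1   (first 30% of training)
>>> current_scale(500, 1000, [0.3, 0.6, 1], [1, 0, 0.5])  # -> 0   (30%..60%: this loss is switched off)
>>> current_scale(900, 1000, [0.3, 0.6, 1], [1, 0, 0.5])  # -> 0.5 (final stage)
Under that reading, "loss@the_first" dominates early training while "loss@the_second" takes over in the middle stage, which matches the complementary scale lists in the example.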
Module contents
losses