Loss to be used with callbacks_mixup().

In the training phase, computes individual losses with regard to two targets, weights them item-wise, and averages the linear combinations to yield the mean batch loss. For validation and testing, it defers to the passed-in loss.
nn_mixup_loss(loss)
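As a rough sketch of the computation described above (hand-rolled with torch for illustration only, not the luz implementation; the tensors are dummy data):

library(torch)

# Per-observation cross-entropy, as obtained by using reduction = "none"
# on the wrapped loss.
ce <- nn_cross_entropy_loss(reduction = "none")

pred   <- torch_randn(4, 10)                                    # batch of 4, 10 classes
y1     <- torch_randint(1, 11, size = 4, dtype = torch_long())  # original targets
y2     <- torch_randint(1, 11, size = 4, dtype = torch_long())  # mixed-in targets
weight <- torch_rand(4)                                         # one mixing weight per observation

# Item-wise linear combination of the two losses, then the batch mean.
batch_loss <- (weight * ce(pred, y1) + (1 - weight) * ce(pred, y2))$mean()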
loss: the underlying loss nn_module to call. It must support the reduction field; during training this attribute will be set to "none" so that the loss is computed for each individual observation. See, for example, the documentation of the reduction argument in torch::nn_cross_entropy_loss().
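For instance, with torch::nn_cross_entropy_loss() the reduction setting is an ordinary field on the loss module, which is what allows it to be switched at training time (a sketch of the idea, not the package internals):

library(torch)

ce <- nn_cross_entropy_loss()
ce$reduction                 # "mean" by default

ce$reduction <- "none"       # the switch the documentation above refers to
losses <- ce(torch_randn(4, 10), torch_randint(1, 11, size = 4, dtype = torch_long()))
losses$shape                 # one loss value per observation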
It should be used together with luz_callback_mixup().
See also: luz_callback_mixup()
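A minimal usage sketch pairing the loss with the callback, assuming a model module net and a training dataloader train_dl (both placeholder names):

library(torch)
library(luz)

# `net` is a placeholder nn_module generator, `train_dl` a placeholder dataloader.
fitted <- net %>%
  setup(
    loss = nn_mixup_loss(nn_cross_entropy_loss()),  # wrap the base loss
    optimizer = optim_adam
  ) %>%
  fit(
    train_dl,
    epochs = 10,
    callbacks = list(luz_callback_mixup())          # mixes inputs and targets each batch
  )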