Each epoch, if there's an improvement in the monitored metric, we serialize the model weights to a temporary file. When training is done, we reload weights from the best model.
luz_callback_keep_best_model(
monitor = "valid_loss",
mode = "min",
min_delta = 0
)
monitor: A string in the format <set>_<metric>, where <set> can be 'train' or 'valid' and <metric> can be the abbreviation of any metric that you are tracking during training. The metric name is case insensitive.

mode: Specifies the direction that is considered an improvement. By default 'min' is used. Can also be 'max' (higher is better) and 'zero' (closer to zero is better).

min_delta: Minimum improvement to reset the patience counter.
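For example, a minimal sketch of a non-default configuration, assuming luz_metric_accuracy() is among the metrics passed to setup() so that a "valid_acc" metric is being tracked:

# Keep the weights from the epoch with the highest validation accuracy;
# mode = "max" because higher accuracy is better.
cb <- luz_callback_keep_best_model(
  monitor = "valid_acc",
  mode = "max"
)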
Other luz_callbacks: luz_callback(), luz_callback_auto_resume(), luz_callback_csv_logger(), luz_callback_early_stopping(), luz_callback_interrupt(), luz_callback_lr_scheduler(), luz_callback_metrics(), luz_callback_mixed_precision(), luz_callback_mixup(), luz_callback_model_checkpoint(), luz_callback_profile(), luz_callback_progress(), luz_callback_resume_from_checkpoint(), luz_callback_train_valid()
cb <- luz_callback_keep_best_model()
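A fuller sketch of how the callback is typically passed to fit(); the model and the random tensors below are purely illustrative:

library(torch)
library(luz)

# Hypothetical one-layer regression model.
model <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 1)
  },
  forward = function(x) {
    self$fc(x)
  }
)

fitted <- model |>
  setup(loss = nnf_mse_loss, optimizer = optim_adam) |>
  fit(
    # list(x, y) of training tensors (illustrative random data).
    list(torch_randn(100, 10), torch_randn(100, 1)),
    epochs = 5,
    # valid_data is needed here because we monitor "valid_loss".
    valid_data = list(torch_randn(20, 10), torch_randn(20, 1)),
    # After fitting, `fitted` holds the weights from the best epoch.
    callbacks = list(luz_callback_keep_best_model(monitor = "valid_loss"))
  )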