mlr3tuning (version 0.9.0)

AutoTuner

Description

The AutoTuner is a mlr3::Learner which wraps another mlr3::Learner and performs the following steps during $train():

  1. The hyperparameters of the wrapped (inner) learner are tuned on the training data via resampling. The tuning can be specified by providing a Tuner, a bbotk::Terminator, a search space as paradox::ParamSet, a mlr3::Resampling and a mlr3::Measure.

  2. The best found hyperparameter configuration is set as hyperparameters for the wrapped (inner) learner stored in at$learner. Access the tuned hyperparameters via at$learner$param_set$values.

  3. A final model is fit on the complete training data using the now parametrized wrapped learner. The respective model is available via field at$learner$model.

During $predict() the AutoTuner simply calls the predict method of the wrapped (inner) learner. Any timeout set for tuning is disabled while fitting the final model.

Note that this approach makes nested resampling possible by passing an AutoTuner object to mlr3::resample() or mlr3::benchmark(). To access the inner resampling results, set store_tuning_instance = TRUE and execute mlr3::resample() or mlr3::benchmark() with store_models = TRUE (see examples).

Super class

mlr3::Learner -> AutoTuner

Public fields

instance_args

(list()) All construction arguments, used to create the TuningInstanceSingleCrit.

tuner

(Tuner).

Active bindings

archive

(ArchiveTuning) Archive of the TuningInstanceSingleCrit.

learner

(mlr3::Learner) Trained learner.

tuning_instance

(TuningInstanceSingleCrit) Internally created tuning instance with all intermediate results.

tuning_result

(data.table::data.table) Shortcut to the result of the TuningInstanceSingleCrit.

predict_type

(character(1)) Stores the currently active predict type, e.g. "response". Must be an element of $predict_types.

hash

(character(1)) Hash (unique identifier) for this object.

Methods

Public methods

Method new()

Creates a new instance of this R6 class.

Usage

AutoTuner$new(
  learner,
  resampling,
  measure,
  terminator,
  tuner,
  search_space = NULL,
  store_tuning_instance = TRUE,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE
)

Arguments

learner

(mlr3::Learner) Learner to tune, see TuningInstanceSingleCrit.

resampling

(mlr3::Resampling) Resampling strategy during tuning, see TuningInstanceSingleCrit. This mlr3::Resampling is meant to be the inner resampling, operating on the training set of an arbitrary outer resampling. For this reason it is not feasible to pass an instantiated mlr3::Resampling here.

measure

(mlr3::Measure) Performance measure to optimize.

terminator

(bbotk::Terminator) When to stop tuning, see TuningInstanceSingleCrit.

tuner

(Tuner) Tuning algorithm to run.

search_space

(paradox::ParamSet) Hyperparameter search space. If NULL, the search space is constructed from the TuneToken in the ParamSet of the learner.
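
As a sketch of the alternative to tagging parameters with to_tune(): an explicit search space can be constructed and passed directly (the ps()/p_dbl helpers are assumed to be available from the paradox version installed alongside mlr3tuning):

```r
library(mlr3)
library(mlr3tuning)
library(paradox)

# explicit search space; roughly equivalent to setting
# cp = to_tune(1e-4, 0.1) on the learner itself
search_space = ps(
  cp = p_dbl(lower = 1e-4, upper = 0.1)
)

at = AutoTuner$new(
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 5),
  tuner = tnr("random_search"),
  search_space = search_space)
```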

store_tuning_instance

(logical(1)) If TRUE (default), stores the internally created TuningInstanceSingleCrit with all intermediate results in slot $tuning_instance.

store_benchmark_result

(logical(1)) If TRUE (default), stores the mlr3::BenchmarkResult in archive.

store_models

(logical(1)) If FALSE (default), the fitted models are not stored in the mlr3::BenchmarkResult. If store_benchmark_result = FALSE, the models are only stored temporarily and are not accessible after tuning. This combination is useful for measures that require a fitted model.

check_values

(logical(1)) If TRUE, hyperparameter configurations are checked for validity before evaluation, and the results are checked afterwards.
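As a sketch of the store_models interaction described above: a measure that needs access to the fitted model (msr("selected_features") is assumed here as an example of such a measure) requires store_models = TRUE during tuning, even when the benchmark result itself is not kept:

```r
# sketch: keep models temporarily for a model-dependent measure,
# without retaining the full benchmark result
at = AutoTuner$new(
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 0.1)),
  resampling = rsmp("holdout"),
  measure = msr("selected_features"),
  terminator = trm("evals", n_evals = 5),
  tuner = tnr("random_search"),
  store_benchmark_result = FALSE,
  store_models = TRUE)
```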

Method base_learner()

Extracts the base learner from nested learner objects like GraphLearner in mlr3pipelines. If recursive = 0, the (tuned) learner is returned.

Usage

AutoTuner$base_learner(recursive = Inf)

Arguments

recursive

(integer(1)) Depth of recursion for multiple nested objects.

Returns

Learner.

Method print()

Printer.

Usage

AutoTuner$print()

Arguments

...

(ignored).

Method clone()

The objects of this class are cloneable with this method.

Usage

AutoTuner$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# NOT RUN {
task = tsk("pima")
train_set = sample(task$nrow, 0.8 * task$nrow)
test_set = setdiff(seq_len(task$nrow), train_set)

at = AutoTuner$new(
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 5),
  tuner = tnr("random_search"))

# tune hyperparameters and fit final model
at$train(task, row_ids = train_set)

# predict with final model
at$predict(task, row_ids = test_set)

# show tuning result
at$tuning_result

# model slot contains trained learner and tuning instance
at$model

# shortcut trained learner
at$learner

# shortcut tuning instance
at$tuning_instance


### nested resampling

at = AutoTuner$new(
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 5),
  tuner = tnr("random_search"))

resampling_outer = rsmp("cv", folds = 3)
rr = resample(task, at, resampling_outer, store_models = TRUE)

# retrieve inner tuning results.
extract_inner_tuning_results(rr)

# performance scores estimated on the outer resampling
rr$score()

# unbiased performance of the final model trained on the full data set
rr$aggregate()
# }