
mlr3tuning (version 0.9.0)

TuningInstanceSingleCrit: Single Criterion Tuning Instance

Description

Specifies a general single-criterion tuning scenario, including the objective function and an archive for Tuners to act upon. This class stores an ObjectiveTuning object that encodes the black box objective function which a Tuner has to optimize. It allows the basic operations of querying the objective at design points ($eval_batch()), storing the evaluations in the internal Archive, and accessing the final result ($result).

Evaluations of hyperparameter configurations are performed in batches by calling mlr3::benchmark() internally. Before a batch is evaluated, the bbotk::Terminator is queried for the remaining budget. If the available budget is exhausted, an exception is raised, and no further evaluations can be performed from this point on.

The tuner is also supposed to store its final result, consisting of a selected hyperparameter configuration and associated estimated performance values, by calling the method instance$assign_result().
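For example, a complete tuning run might look like the following minimal sketch (the random search tuner tnr("random_search") and the concrete hyperparameter range are illustrative assumptions, not part of this page):

library(mlr3)
library(mlr3tuning)
library(paradox)

# construct an instance with a small evaluation budget
instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ps(cp = p_dbl(lower = 0.001, upper = 0.1)),
  terminator = trm("evals", n_evals = 10)
)

# the tuner evaluates configurations in batches and calls
# instance$assign_result() internally once the budget is exhausted
tuner = tnr("random_search")
tuner$optimize(instance)

# selected hyperparameter configuration and its estimated performance
instance$result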


Super classes

bbotk::OptimInstance -> bbotk::OptimInstanceSingleCrit -> TuningInstanceSingleCrit

Active bindings

result_learner_param_vals

(list()) Param values for the optimal learner call.
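A minimal sketch of transferring the result back to a learner, assuming instance already holds a result (e.g. after a Tuner has finished):

# set the optimal configuration, including fixed parameter values
learner = lrn("classif.rpart")
learner$param_set$values = instance$result_learner_param_vals

# train the tuned learner on the full task
learner$train(tsk("iris"))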

Methods

Public methods

Method new()

Creates a new instance of this R6 class.

This defines the resampled performance of a learner on a task, a feasibility region for the parameters the tuner is supposed to optimize, and a termination criterion.

Usage

TuningInstanceSingleCrit$new(
  task,
  learner,
  resampling,
  measure,
  terminator,
  search_space = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE
)

Arguments

task

(mlr3::Task) Task to operate on.

learner

(mlr3::Learner) Learner to tune.

resampling

(mlr3::Resampling) Resampling that is used to evaluate the performance of the hyperparameter configurations. Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits. Already instantiated resamplings are kept unchanged. Specialized Tuners may change the resampling, e.g. to evaluate a hyperparameter configuration on different data splits. This field, however, always returns the resampling passed in construction.

measure

(mlr3::Measure) Measure to optimize.

terminator

(bbotk::Terminator) Termination criterion.

search_space

(paradox::ParamSet) Hyperparameter search space. If NULL, the search space is constructed from the TuneToken objects in the ParamSet of the learner; see the sketch after this argument list.

store_benchmark_result

(logical(1)) If TRUE (default), stores the mlr3::BenchmarkResult in the archive.

store_models

(logical(1)) If FALSE (default), the fitted models are not stored in the mlr3::BenchmarkResult. If store_models = TRUE but store_benchmark_result = FALSE, the models are only stored temporarily and are not accessible after tuning. This combination can be useful for measures that require a model.

check_values

(logical(1)) If TRUE, hyperparameter configurations are checked for validity before evaluation, and the results are checked afterwards.
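As a sketch of the search_space = NULL case mentioned above (the concrete ranges are illustrative assumptions): hyperparameters can be marked directly in the learner with paradox::to_tune(), and the instance then derives the search space from these TuneToken objects.

library(mlr3)
library(mlr3tuning)
library(paradox)

# mark hyperparameters to tune directly in the learner
learner = lrn("classif.rpart",
  cp = to_tune(0.001, 0.1),
  minsplit = to_tune(1, 10)
)

# search_space is left at its default NULL and is constructed
# from the TuneToken objects above
instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 5)
)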

Method assign_result()

The Tuner object writes the best found point and estimated performance value here. For internal use.

Usage

TuningInstanceSingleCrit$assign_result(xdt, y, learner_param_vals = NULL)

Arguments

xdt

(data.table::data.table()) x values as data.table. Each row is one point. Contains the value in the search space of the TuningInstanceSingleCrit object. Can contain additional columns for extra information.

y

(numeric(1)) Optimal outcome.

learner_param_vals

(list()) Fixed parameter values of the learner that are not part of the search space.
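Although assign_result() is meant to be called by a Tuner, it can also be invoked manually. A minimal sketch, assuming an instance like the one in the Examples below (search space over cp and minsplit, measure classif.ce; the concrete values are illustrative):

# write a result by hand; y must be named by the measure id
instance$assign_result(
  xdt = data.table(cp = 0.01, minsplit = 3),
  y = c(classif.ce = 0.05)
)
instance$result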

Method clone()

The objects of this class are cloneable with this method.

Usage

TuningInstanceSingleCrit$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

library(mlr3)
library(mlr3tuning)
library(paradox)
library(data.table)

# define search space
search_space = ps(
  cp = p_dbl(lower = 0.001, upper = 0.1),
  minsplit = p_int(lower = 1, upper = 10)
)

# initialize instance
instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = trm("evals", n_evals = 5)
)

# generate design
design = data.table(cp = c(0.05, 0.01), minsplit = c(5, 3))

# eval design
instance$eval_batch(design)

# show archive
instance$archive
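# best configuration evaluated so far (bbotk's Archive$best() is assumed)
instance$archive$best()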

### error handling

# get a learner which breaks with 50% probability
# set encapsulation + fallback
learner = lrn("classif.debug", error_train = 0.5)
learner$encapsulate = c(train = "evaluate", predict = "evaluate")
learner$fallback = lrn("classif.featureless")

# define search space
search_space = ps(
  x = p_dbl(lower = 0, upper = 1)
)

instance = TuningInstanceSingleCrit$new(
  task = tsk("wine"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = trm("evals", n_evals = 5)
)

instance$eval_batch(data.table(x = 1:5 / 5))
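# show archive; failed resampling iterations fall back to the
# featureless learner, so every configuration still receives a score
instance$archive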
