mlr3tuning (version 0.5.0)

TuningInstanceSingleCrit: Single Criterion Tuning Instance

Description

Specifies a general single-criterion tuning scenario, including the objective function and an archive for Tuners to act upon. This class stores an ObjectiveTuning object that encodes the black box objective function which a Tuner has to optimize. It allows the basic operations of querying the objective at design points ($eval_batch()), storing the evaluations in the internal Archive, and accessing the final result ($result).

Evaluations of hyperparameter configurations are performed in batches by calling mlr3::benchmark() internally. Before a batch is evaluated, the bbotk::Terminator is queried for the remaining budget. If the available budget is exhausted, an exception is raised, and no further evaluations can be performed from this point on.

The tuner is also supposed to store its final result, consisting of a selected hyperparameter configuration and associated estimated performance values, by calling the method instance$assign_result().
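A typical workflow is sketched below, assuming mlr3tuning (around version 0.5.0) together with mlr3, paradox and bbotk; the task, learner, search space and budget are purely illustrative. The instance is constructed, handed to a Tuner created with tnr(), and the selected configuration is then read from $result and $result_learner_param_vals.

library(mlr3)
library(mlr3tuning)
library(paradox)

instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ParamSet$new(list(
    ParamDbl$new("cp", lower = 0.001, upper = 0.1)
  )),
  terminator = trm("evals", n_evals = 10)
)

# a Tuner queries the objective in batches until the terminator stops it
tuner = tnr("random_search")
tuner$optimize(instance)

# selected hyperparameter configuration and its estimated performance
instance$result
# full parameter list for configuring the learner for a final fit
instance$result_learner_param_vals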

Super classes

bbotk::OptimInstance -> bbotk::OptimInstanceSingleCrit -> TuningInstanceSingleCrit

Active bindings

result_learner_param_vals

(list()) Param values for the optimal learner call.

Methods

Public methods

Method new()

Creates a new instance of this R6 class.

This defines the resampled performance of a learner on a task, a feasibility region for the parameters the tuner is supposed to optimize, and a termination criterion.

Usage

TuningInstanceSingleCrit$new(
  task,
  learner,
  resampling,
  measure,
  terminator,
  search_space = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE
)

Arguments

task

(mlr3::Task) Task to operate on.

learner

(mlr3::Learner).

resampling

(mlr3::Resampling) Uninstantiated resamplings are instantiated during construction so that all configurations are evaluated on the same data splits (see the construction sketch after this argument list).

measure

(mlr3::Measure) Measure to optimize.

terminator

(bbotk::Terminator) Termination criterion for the tuning.

search_space

(paradox::ParamSet).

store_benchmark_result

(logical(1)) Store benchmark result in archive?

store_models

(logical(1)) Store models in benchmark result?

check_values

(logical(1)) Should the parameters be checked for validity before evaluation, and the results be checked afterwards?
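A construction sketch with illustrative values, assuming the mlr3 sugar functions tsk(), lrn(), rsmp(), msr() and trm(): the resampling is instantiated before construction so that the data splits are fixed explicitly rather than during construction.

library(mlr3)
library(mlr3tuning)
library(paradox)

task = tsk("iris")

# instantiating the resampling up front fixes the data splits explicitly;
# an uninstantiated resampling would be instantiated in the constructor
resampling = rsmp("cv", folds = 3)
resampling$instantiate(task)

instance = TuningInstanceSingleCrit$new(
  task = task,
  learner = lrn("classif.rpart"),
  resampling = resampling,
  measure = msr("classif.ce"),
  search_space = ParamSet$new(list(
    ParamDbl$new("cp", lower = 0.001, upper = 0.1)
  )),
  terminator = trm("evals", n_evals = 5),
  store_benchmark_result = TRUE,  # keep the mlr3::benchmark() result in the archive
  store_models = FALSE,           # discard fitted models to save memory
  check_values = FALSE
)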

Method assign_result()

The Tuner object writes the best found point and estimated performance value here. For internal use.

Usage

TuningInstanceSingleCrit$assign_result(xdt, y, learner_param_vals = NULL)

Arguments

xdt

(data.table::data.table()) x values as data.table. Each row is one point. Contains the values in the search space of the TuningInstanceSingleCrit object. Can contain additional columns for extra information.

y

(numeric(1)) Optimal outcome.

learner_param_vals

(list()) Fixed parameter values of the learner that are not part of the search space.
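This method is normally called by the Tuner rather than by the user. A hypothetical call for the instance constructed in the Examples below could look as follows; the outcome must be named after the measure id, and the values shown here are made up.

# hypothetical internal call: xdt is the selected point in the search space,
# y the estimated performance named after the measure id (values invented)
inst$assign_result(
  xdt = data.table::data.table(cp = 0.01, minsplit = 5),
  y = c(classif.ce = 0.04)
)
inst$result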

Method clone()

The objects of this class are cloneable with this method.

Usage

TuningInstanceSingleCrit$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

library(data.table)
library(paradox)
library(mlr3)

# Objects required to define the performance evaluator:
task = tsk("iris")
learner = lrn("classif.rpart")
resampling = rsmp("holdout")
measure = msr("classif.ce")
param_set = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  ParamInt$new("minsplit", lower = 1, upper = 10))
)

terminator = trm("evals", n_evals = 5)
inst = TuningInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  search_space = param_set,
  terminator = terminator
)

# first 4 points as cross product
design = CJ(cp = c(0.05, 0.01), minsplit = c(5, 3))
inst$eval_batch(design)
inst$archive

# evaluate one more point; this fifth evaluation still fits the budget
tryCatch(
  inst$eval_batch(data.table(cp = 0.01, minsplit = 7)),
  terminated_error = function(e) message(as.character(e))
)

# try another point although the budget is now exhausted
# -> no extra evaluations
tryCatch(
  inst$eval_batch(data.table(cp = 0.01, minsplit = 9)),
  terminated_error = function(e) message(as.character(e))
)

inst$archive

### Error handling
# get a learner which breaks with 50% probability
# set encapsulation + fallback
learner = lrn("classif.debug", error_train = 0.5)
learner$encapsulate = c(train = "evaluate", predict = "evaluate")
learner$fallback = lrn("classif.featureless")

param_set = ParamSet$new(list(
  ParamDbl$new("x", lower = 0, upper = 1)
))

inst = TuningInstanceSingleCrit$new(
  task = tsk("wine"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  search_space = param_set,
  terminator = trm("evals", n_evals = 5)
)

tryCatch(
  inst$eval_batch(data.table(x = 1:5 / 5)),
  terminated_error = function(e) message(as.character(e))
)

archive = inst$archive$data()

# column errors: multiple errors recorded
print(archive)
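# the archive is a data.table; rows where the debug learner failed can be
# inspected via the "errors" column (column name as produced by the
# benchmark-based archive in this version)
archive[errors > 0]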