Abstract Tuner class that implements the base functionality each tuner must provide. A tuner is an object that describes the tuning strategy, i.e. how to optimize the black-box function and its feasible set defined by the TuningInstanceSingleCrit / TuningInstanceMultiCrit object.

A tuner must write its result into the TuningInstanceSingleCrit / TuningInstanceMultiCrit using the assign_result method of the bbotk::OptimInstance at the end of its tuning, in order to store the best selected hyperparameter configuration and its estimated performance vector.
.optimize(instance) -> NULL
Abstract base method. Implement to specify the tuning of your subclass. See the technical details below.

.assign_result(instance) -> NULL
Abstract base method. Implement to specify how the final configuration is selected. See the technical details below.
A subclass is implemented in the following way (see the sketch after this list):

- Inherit from Tuner.
- Specify the private abstract method $.optimize() and use it to call into your optimizer.
- You need to call instance$eval_batch() to evaluate design points.
- The batch evaluation is requested at the TuningInstanceSingleCrit / TuningInstanceMultiCrit object instance, so each batch is possibly executed in parallel via mlr3::benchmark(), and all evaluations are stored inside of instance$archive.
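Below is a minimal sketch of such a subclass. It is illustrative only: the class name TunerRandomSketch and the one-point-per-batch random sampling are assumptions, while Tuner, instance$eval_batch() and the constructor signature come from this page.

library(R6)
library(paradox)
library(mlr3tuning)

TunerRandomSketch = R6Class("TunerRandomSketch",
  inherit = Tuner,
  public = list(
    initialize = function() {
      super$initialize(
        param_set = ParamSet$new(),             # this tuner has no settings
        param_classes = c("ParamDbl", "ParamInt"),
        properties = "single-crit"
      )
    }
  ),
  private = list(
    .optimize = function(inst) {
      # Loop until eval_batch() raises a "terminated_error"; the exception
      # is caught by the surrounding optimize() machinery (see below).
      repeat {
        design = generate_design_random(inst$search_space, 1)
        inst$eval_batch(design$data)
      }
    }
  )
)

An object of this class can be used like any other tuner, e.g. in place of tnr("random_search") in the example at the end of this page.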
Before the batch evaluation, the bbotk::Terminator is checked, and if it is positive, an exception of class "terminated_error" is generated. In the latter case the current batch of evaluations is still stored in instance, but the numeric scores are not sent back to the handling optimizer, as it has lost execution control. After such an exception is caught, we select the best configuration from instance$archive and return it.

Note that therefore more points than specified by the bbotk::Terminator may be evaluated, as the Terminator is only checked before a batch evaluation and not in between the evaluations within a batch. How many more depends on the batch size.
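A small illustration of this effect, assuming the batch_size setting of tnr("random_search") (objects as in the example at the end of this page): the terminator allows 2 evaluations, but the whole first batch of 5 points is evaluated.

library(mlr3)
library(mlr3tuning)
library(paradox)

instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ParamSet$new(list(ParamDbl$new("cp", lower = 0.001, upper = 0.1))),
  terminator = trm("evals", n_evals = 2)
)

tuner = tnr("random_search", batch_size = 5)
tuner$optimize(instance)
instance$archive$n_evals  # 5, not 2: the first batch is evaluated in full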
Overwrite the private super-method .assign_result() if you want to decide yourself how the final configuration in the instance and its estimated performance are determined. The default behavior is: we pick the best resample experiment, regarding the given measure, and then assign its configuration and aggregated performance to the instance.
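For illustration, a hedged sketch of such an override, reusing the TunerRandomSketch class from above. It spells out the default best-configuration policy, so a custom selection rule would replace the archive$best() call; the class name TunerRandomSketch2 is an assumption.

library(R6)

TunerRandomSketch2 = R6Class("TunerRandomSketch2",
  inherit = TunerRandomSketch,
  private = list(
    .assign_result = function(inst) {
      # Pick the best row of the archive and hand its configuration and
      # aggregated performance back to the instance.
      best = inst$archive$best()
      xdt = best[, inst$search_space$ids(), with = FALSE]
      y = unlist(best[, inst$archive$cols_y, with = FALSE])
      inst$assign_result(xdt, y)
    }
  )
)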
Fields:

param_set (paradox::ParamSet).
param_classes (character()).
properties (character()).
packages (character()).
new()
Creates a new instance of this R6 class.

Usage: Tuner$new(param_set, param_classes, properties, packages = character())

Arguments:

param_set (paradox::ParamSet)
Set of control parameters for the tuner.

param_classes (character())
Supported parameter classes for learner hyperparameters that the tuner can optimize, subclasses of paradox::Param.

properties (character())
Set of properties of the tuner. Must be a subset of mlr_reflections$tuner_properties.

packages (character())
Set of required packages. Note that these packages will be loaded via requireNamespace(), and are not attached.
format()
Helper for print outputs.

Usage: Tuner$format()

print()
Print method.

Usage: Tuner$print()

Returns: (character()).
optimize()
Performs the tuning on a TuningInstanceSingleCrit or TuningInstanceMultiCrit until termination. The single evaluations will be written into the ArchiveTuning that resides in the TuningInstanceSingleCrit / TuningInstanceMultiCrit. The result will be written into the instance object.

Usage: Tuner$optimize(inst)

Returns: NULL.
clone()
The objects of this class are cloneable with this method.

Usage: Tuner$clone(deep = FALSE)

Arguments:

deep
Whether to make a deep clone.
library(mlr3)
library(mlr3tuning)
library(paradox)

search_space = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1)
))

terminator = trm("evals", n_evals = 3)

instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = terminator
)

# swap this line to use a different Tuner
tt = tnr("random_search")

# modifies the instance by reference
tt$optimize(instance)

# returns best configuration and best performance
instance$result

# access the data.table / benchmark result with the full path of all
# evaluations
instance$archive