TunerAsyncSuccessiveHalving
Class that implements the Asynchronous Successive Halving Algorithm (ASHA), the asynchronous version of OptimizerBatchSuccessiveHalving.
This mlr3tuning::Tuner can be instantiated via the dictionary mlr3tuning::mlr_tuners or with the associated sugar function mlr3tuning::tnr():

TunerAsyncSuccessiveHalving$new()
mlr_tuners$get("async_successive_halving")
tnr("async_successive_halving")
If the learner lacks a natural budget parameter, mlr3pipelines::PipeOpSubsample can be applied to use the subsampling rate as the budget parameter. The resulting mlr3pipelines::GraphLearner is fitted on small proportions of the mlr3::Task in the first stage and on the complete task in the last stage.
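A minimal sketch of this pattern, assuming a classification tree as the base learner (the learner choice and the 1/9 lower bound are illustrative, not prescribed by the package):

library(mlr3)
library(mlr3pipelines)
library(mlr3tuning)

# wrap the learner so that the subsampling fraction becomes tunable
graph_learner = as_learner(po("subsample") %>>% lrn("classif.rpart"))

# tag the subsampling fraction as the budget parameter
graph_learner$param_set$set_values(
  subsample.frac = to_tune(p_dbl(1 / 9, 1, tags = "budget"))
)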
A custom paradox::Sampler object can be supplied to draw the initial configurations. A custom sampler may look like this (the full example is given in the examples section):
# - beta distribution with alpha = 2 and beta = 5
# - categorical distribution with custom probabilities
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))
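For the snippet above to run, params must hold the one-dimensional subspaces of the search space. A hypothetical setup, assuming paradox 1.0 where Sampler1D objects take one-dimensional ParamSets (the parameter names and ranges are illustrative):

library(paradox)

# hypothetical search space; params[[2]] and params[[3]] are eta and booster
search_space = ps(
  nrounds = p_int(lower = 1, upper = 16, tags = "budget"),
  eta     = p_dbl(lower = 0, upper = 1),
  booster = p_fct(levels = c("gbtree", "gblinear", "dart"))
)
params = search_space$subspaces()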
eta
(numeric(1))
With every stage, the budget is increased by a factor of eta and only the best 1 / eta configurations are promoted to the next stage. For example, with eta = 2 the budget doubles at each stage and the best half of the configurations is promoted. Non-integer values are supported, but eta must be greater than 1.
sampler
(paradox::Sampler)
Object defining how the samples of the parameter space should be drawn. The default is uniform sampling.
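A sketch of passing both parameters at construction time (eta = 2 and the reuse of the sampler from above are illustrative choices):

library(mlr3hyperband)

# promote the best half of the configurations at each stage
tuner = tnr("async_successive_halving", eta = 2, sampler = sampler)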
The bbotk::Archive holds the following additional columns that are specific to SHA:
stage
(integer(1))
Stage index. Starts counting at 0.
asha_id
(character(1))
Unique identifier for each configuration across stages.
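A sketch of inspecting these columns after tuning, assuming a finished tuning instance named instance (a hypothetical name for illustration):

library(data.table)

# SHA-specific columns recorded for every evaluated configuration
as.data.table(instance$archive)[, .(asha_id, stage)]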
mlr3tuning::Tuner -> mlr3tuning::TunerAsync -> mlr3tuning::TunerAsyncFromOptimizerAsync -> TunerAsyncSuccessiveHalving