
Loop function for multi-objective Bayesian optimization via ParEGO. Normally used inside an OptimizerMbo.
In each iteration after the initial design, the observed objective function values are normalized, and q candidates are obtained by scalarizing these values via the augmented Tchebycheff function, updating the surrogate with respect to these scalarized values, and optimizing the acquisition function.
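The scalarization step can be sketched as follows. This is a minimal illustration, not mlr3mbo's internal code, and `augmented_tchebycheff` is a hypothetical helper name: given objective values y normalized to [0, 1] and a weight vector lambda summing to 1, the augmented Tchebycheff value is max_j(lambda_j * y_j) + rho * sum_j(lambda_j * y_j).

```r
# Sketch of the augmented Tchebycheff scalarization (illustration only,
# not mlr3mbo internals). y: normalized objective values in [0, 1];
# lambda: weight vector summing to 1; rho: weight of the sum term.
augmented_tchebycheff = function(y, lambda, rho = 0.05) {
  weighted = lambda * y
  max(weighted) + rho * sum(weighted)
}

# Two normalized objective values with equal weights:
# max(0.1, 0.4) + 0.05 * (0.1 + 0.4) = 0.425
augmented_tchebycheff(c(0.2, 0.8), lambda = c(0.5, 0.5))
```

The surrogate is then fitted to these scalar values, reducing the multi-objective problem to a sequence of single-objective ones.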
bayesopt_parego(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  q = 1L,
  s = 100L,
  rho = 0.05,
  random_interleave_iter = 0L
)
invisible(instance)
The original instance is modified in-place and returned invisibly.
instance (bbotk::OptimInstanceBatchMultiCrit)
The bbotk::OptimInstanceBatchMultiCrit to be optimized.

surrogate (SurrogateLearner)
The SurrogateLearner to be used as a surrogate.

acq_function (AcqFunction)
The AcqFunction to be used as acquisition function.

acq_optimizer (AcqOptimizer)
The AcqOptimizer to be used as acquisition function optimizer.

init_design_size (NULL | integer(1))
Size of the initial design. If NULL and the bbotk::ArchiveBatch contains no evaluations, 4 * d is used, with d being the dimensionality of the search space. Points are generated via a Sobol sequence.
q (integer(1))
Batch size, i.e., the number of candidates to be obtained for a single batch. Default is 1.
s (integer(1))
Parameter s of the augmented Tchebycheff function in Knowles (2006), controlling the set of weight vectors from which the scalarization weights are drawn. Default is 100.
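For intuition, the role of s can be illustrated for the bi-objective case (an illustration of the ParEGO scheme from Knowles (2006), not mlr3mbo's internal code): the weight vectors have components that are multiples of 1/s and sum to 1, and one is drawn uniformly at random in each iteration.

```r
# Illustration (not mlr3mbo internals): for k = 2 objectives, the
# candidate weight vectors are (l / s, (s - l) / s) for l = 0, ..., s,
# so there are s + 1 of them, each summing to 1.
s = 4
lambdas = lapply(0:s, function(l) c(l / s, (s - l) / s))
length(lambdas)  # s + 1 = 5 candidate weight vectors
sample(lambdas, 1)[[1]]  # one weight vector drawn uniformly at random
```

A larger s therefore gives a finer grid of trade-off directions between the objectives.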
rho (numeric(1))
Parameter rho of the augmented Tchebycheff function in Knowles (2006), i.e., the weight of the augmentation (weighted-sum) term. Default is 0.05.
random_interleave_iter (integer(1))
Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random and evaluated (instead of a model-based proposal). For example, if random_interleave_iter = 2, random interleaving is performed in the second, fourth, sixth, ... iteration. Default is 0, i.e., no random interleaving is performed at all.
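The interleaving rule described above can be sketched as a simple modulo check (an illustration of the stated behavior, not mlr3mbo's internal code; `use_random` is a hypothetical helper name):

```r
# Sketch of the interleaving rule: with random_interleave_iter = 2,
# iterations 2, 4, 6, ... (counted after the initial design) evaluate a
# uniformly random point instead of the acquisition-function proposal.
random_interleave_iter = 2
use_random = function(iter) {
  random_interleave_iter > 0 && iter %% random_interleave_iter == 0
}
vapply(1:6, use_random, logical(1))
# FALSE TRUE FALSE TRUE FALSE TRUE
```

With random_interleave_iter = 0 the condition is never met, so every proposal is model-based.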
Knowles, Joshua (2006). “ParEGO: A Hybrid Algorithm With On-Line Landscape Approximation for Expensive Multiobjective Optimization Problems.” IEEE Transactions on Evolutionary Computation, 10(1), 50--66.
Other Loop Function: loop_function, mlr_loop_functions, mlr_loop_functions_ego, mlr_loop_functions_emo, mlr_loop_functions_mpcl, mlr_loop_functions_smsego
# \donttest{
if (requireNamespace("mlr3learners") &&
    requireNamespace("DiceKriging") &&
    requireNamespace("rgenoud")) {
  library(bbotk)
  library(paradox)
  library(mlr3learners)

  # Bi-objective test function: both y1 and y2 are to be minimized.
  fun = function(xs) {
    list(y1 = xs$x^2, y2 = (xs$x - 2)^2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y1 = p_dbl(tags = "minimize"), y2 = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchMultiCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  # ParEGO fits a single surrogate to the scalarized values.
  surrogate = default_surrogate(instance, n_learner = 1)
  acq_function = acqf("ei")
  acq_optimizer = acqo(
    optimizer = opt("random_search", batch_size = 100),
    terminator = trm("evals", n_evals = 100))

  optimizer = opt("mbo",
    loop_function = bayesopt_parego,
    surrogate = surrogate,
    acq_function = acq_function,
    acq_optimizer = acq_optimizer)

  optimizer$optimize(instance)
}
# }