Given loss values L_i (loss.vec) and model complexity values K_i
(model.complexity), consider the model selection function
i*(lambda) = argmin_i L_i + lambda*K_i. This function computes all of
its solutions (i, min.lambda, max.lambda), where model i is the
solution for every lambda in the interval (min.lambda, max.lambda).
Use this function after having computed changepoints and loss values
for each model, and before using labelError. It uses the linear time
algorithm implemented in C code (modelSelectionC).
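To illustrate the computation that the C code performs, here is an independent sketch in Python (not the package's modelSelectionC): after sorting models by decreasing complexity, a single stack pass discards dominated models and finds the breakpoint lambda values between consecutive optimal models.

```python
import math

def model_selection(loss, complexity):
    """Return a list of (i, min_lam, max_lam) such that model i minimizes
    loss[i] + lam*complexity[i] for every lam in (min_lam, max_lam)."""
    # Sort by decreasing complexity; the most complex (lowest-loss)
    # selectable model is optimal as lambda approaches 0.
    order = sorted(range(len(loss)), key=lambda i: (-complexity[i], loss[i]))
    stack = []  # entries: (model index, smallest lambda where it is optimal)
    for i in order:
        if stack and complexity[i] == complexity[stack[-1][0]]:
            continue  # same complexity but no smaller loss: never selected
        cross = 0.0
        while stack:
            j, start_j = stack[-1]
            if loss[i] <= loss[j]:
                stack.pop()  # j dominated: i has lower loss AND complexity
                continue
            # Lambda where the two cost lines cross: j optimal below, i above.
            cross = (loss[i] - loss[j]) / (complexity[j] - complexity[i])
            if cross <= start_j:
                stack.pop()  # j's optimality interval would be empty
                continue
            break
        stack.append((i, cross if stack else 0.0))
    # Each model is optimal from its own start up to the next model's start.
    return [
        (i, start, stack[k + 1][1] if k + 1 < len(stack) else math.inf)
        for k, (i, start) in enumerate(stack)
    ]
```

For example, `model_selection([0, 1, 3, 7], [3, 2, 1, 0])` selects all four models, with breakpoints at lambda = 1, 2, and 4; each stack entry is pushed and popped at most once, so the pass after sorting takes linear time.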
modelSelection(models, loss = "loss", complexity = "complexity")
models: data.frame with one row per model. There must be at least two
columns, models[[loss]] and models[[complexity]], but there can also be
other meta-data columns.

loss: character: column name of models to interpret as the loss L_i.

complexity: character: column name of models to interpret as the
complexity K_i.
data.frame with a row for each model that can be selected for at least
one lambda value, and the following columns: (min.lambda, max.lambda)
and (min.log.lambda, max.log.lambda) are intervals of optimal penalty
constants, on the original and the log scale; the other columns (and
rownames) are taken from models. This data.frame should be used as the
models argument of labelError.
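On the log scale, the first interval's lower bound and the last interval's upper bound are infinite, since min.lambda = 0 for the most complex selected model and max.lambda = Inf for the least complex one. A small Python illustration (the interval values here are hypothetical):

```python
import math

# Hypothetical (model index, min.lambda, max.lambda) intervals, as
# produced by model selection; lambda ranges over (0, Inf).
intervals = [(0, 0.0, 1.5), (2, 1.5, 4.0), (3, 4.0, math.inf)]

for i, lo, hi in intervals:
    # log(0) maps to -Inf and log(Inf) stays Inf, so the log-scale
    # intervals still cover the whole penalty axis.
    min_log = math.log(lo) if lo > 0 else -math.inf
    max_log = math.log(hi) if hi < math.inf else math.inf
    print(i, min_log, max_log)
```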