Methods for models fitted by mixed model boosting algorithms.
# S3 method for mermboost
predict(object, newdata = NULL, RE = TRUE,
type = c("link", "response", "class"), which = NULL,
aggregate = c("sum", "cumsum", "none"), ...)
# S3 method for glmermboost
predict(object, newdata = NULL, RE = TRUE,
type = c("link", "response", "class"), which = NULL,
aggregate = c("sum", "cumsum", "none"), ...)
# S3 method for mermboost
ranef(object, iteration = mstop(object), ...)
# S3 method for glmermboost
ranef(object, iteration = mstop(object), ...)
# S3 method for mermboost
VarCorr(x, sigma = 1, iteration = mstop(x), ...)
# S3 method for glmermboost
VarCorr(x, sigma = 1, iteration = mstop(x), ...)
# S3 method for mer_cv
mstop(object, ...)
# S3 method for mer_cv
plot(x, ...)
The predict.mermboost methods return a vector, a matrix, or a list, depending on the arguments.
ranef.mermboost returns a matrix with the cluster identifiers as row names and the random effects as elements.
Applying VarCorr.mermboost to a mermboost model returns an object of class VarCorr.merMod.
For cross-validation objects of class mer_cv, mstop.mer_cv returns the optimal stopping iteration as a numeric value, while plot.mer_cv plots the cross-validation risk paths (see the sketch at the end of the Examples below).
an object of class glmermboost or mermboost. For mstop.mer_cv, it refers to an
object resulting from mer_cvrisk.
optionally, a data frame in which to look for variables with
which to predict. In case the model was fitted using the matrix
interface to glmermboost, newdata must be a matrix
as well (an error is given otherwise).
If RE = TRUE but the cluster identifier used for fitting is not found in the newdata object,
RE is set to FALSE.
a logical value (TRUE/FALSE) indicating whether to include random effects.
a subset of base-learners to take into account for computing
predictions or coefficients. If which is given
(as an integer vector or as characters corresponding
to base-learners), a list or matrix is returned.
Random effects are ignored in this case.
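For illustration, a minimal sketch of restricting predictions to single base-learners, re-using the Orthodont model from the Examples section below (assuming the mermboost package is attached; the selected base-learner indices and labels are illustrative):

library(mermboost)
data(Orthodont)
mod <- glmermboost(distance ~ age + Sex + (1 | Subject),
                   data = Orthodont, family = gaussian,
                   control = boost_control(mstop = 50))
## contributions of selected base-learners to the linear predictor;
## random effects are not included once 'which' is given
predict(mod, which = "age")
predict(mod, which = 1:2)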
the type of prediction required. The default is on the scale
of the predictors; the alternative "response" is on
the scale of the response variable. Thus for a
binomial model the default predictions are of log-odds
(probabilities on logit scale) and type = "response" gives
the predicted probabilities. The "class" option returns
predicted classes for binomial data.
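As an illustration, a minimal sketch with simulated clustered binary data; whether glmermboost accepts family = binomial analogously to the gaussian example below, and the required coding of the response, are assumptions here:

library(mermboost)
set.seed(1)
## simulated clustered binary data (illustrative only)
df <- data.frame(id = factor(rep(1:20, each = 10)), x = rnorm(200))
re <- rnorm(20)   # cluster-specific random intercepts
df$y <- factor(rbinom(200, 1, plogis(0.5 * df$x + re[as.integer(df$id)])))
## binomial family assumed to be accepted like 'gaussian' in the Examples
bin_mod <- glmermboost(y ~ x + (1 | id), data = df, family = binomial,
                       control = boost_control(mstop = 50))
predict(bin_mod, type = "link")      # log-odds (default scale)
predict(bin_mod, type = "response")  # predicted probabilities
predict(bin_mod, type = "class")     # predicted classes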
a character specifying how to aggregate predictions
or coefficients of single base-learners. The default
returns the prediction or coefficient for the final number of
boosting iterations. "cumsum" returns a
matrix (one row per base-learner) with the
cumulative coefficients for all iterations
simultaneously (in columns). "none" returns a
list of matrices where the jth column of the
respective matrix contains the predictions
of the base-learner of the jth boosting
iteration (and zero if the base-learner is not
selected in this iteration). No random effects
are considered here.
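For illustration, a minimal sketch of the aggregation options, again using the Orthodont model from the Examples section below (assuming the mermboost package is attached):

library(mermboost)
data(Orthodont)
mod <- glmermboost(distance ~ age + Sex + (1 | Subject),
                   data = Orthodont, family = gaussian,
                   control = boost_control(mstop = 50))
## cumulative predictions over the 50 boosting iterations (no random effects)
p_cum <- predict(mod, aggregate = "cumsum")
str(p_cum)
## per-iteration contributions of the base-learners
p_none <- predict(mod, aggregate = "none")
str(p_none)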
an integer specifying the boosting iteration from which the random component is taken.
an argument used in lme4. It exists for technical reasons but has no effect here.
a cross-validation object for plot.mer_cv or a mermboost object for VarCorr.mermboost.
additional arguments passed to callees.
The methods are intended to correspond to the equivalent mboost and lme4 functions. The additional arguments controlling how random effects are handled, however, might be of interest.
mstop.mer_cv and plot.mer_cv
library(mermboost)

data(Orthodont)
## fit a boosted linear mixed model with a random intercept per Subject
mod <- glmermboost(distance ~ age + Sex + (1 | Subject),
                   data = Orthodont, family = gaussian,
                   control = boost_control(mstop = 50))

## compare predictions with and without random effects
any(predict(mod, RE = FALSE) == predict(mod, RE = TRUE))

## predictions without random effects equal the glmboost predictions
## plus the estimated nuisance component
all(predict(mod, RE = FALSE) ==
      predict.glmboost(mod) + mod$nuisance()[[mstop(mod)]]$ff)

ranef(mod)                    # estimated random effects per Subject
VarCorr(mod, iteration = 10)  # variance components at iteration 10
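
## A sketch of choosing the stopping iteration by cross-validation.
## mer_cvrisk performs the cross-validation; the 'no_of_folds' argument
## used here is an assumption -- see ?mer_cvrisk for the exact interface.
cv <- mer_cvrisk(mod, no_of_folds = 5)
mstop(cv)   # optimal number of boosting iterations
plot(cv)    # cross-validation risk paths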