Usage

## S3 method for class 'glmboost':
print(x, ...)
## S3 method for class 'gamboost':
print(x, ...)
## S3 method for class 'glmboost':
coef(object, ...)
## S3 method for class 'gamboost':
coef(object, ...)
## S3 method for class 'gamboost':
AIC(object, method = c("corrected", "classical", "gMDL"), ..., k = 2)
## S3 method for class 'glmboost':
AIC(object, method = c("corrected", "classical", "gMDL"),
df = c("trace", "actset"), ..., k = 2)
## S3 method for class 'gbAIC':
mstop(object, ...)
## S3 method for class 'gb':
mstop(object, ...)
## S3 method for class 'cvrisk':
mstop(object, ...)
## S3 method for class 'blackboost':
mstop(object, ...)
## S3 method for class 'gb':
predict(object, newdata = NULL, type = c("lp", "response"),
allIterations = FALSE, ...)
## S3 method for class 'blackboost':
predict(object, newdata = NULL, type = c("lp", "response"),
allIterations = FALSE, ...)
## S3 method for class 'gb':
fitted(object, type = c("lp", "response"), ...)
## S3 method for class 'gb':
logLik(object, ...)
Arguments

object: an object of class glmboost, gamboost, blackboost or gbAIC.

x: an object of class glmboost or gamboost.

df: a character specifying how degrees of freedom are computed: "trace" defines degrees of freedom by the trace of the boosting hat matrix and "actset" uses the number of non-zero coefficients.

k: numeric penalty per parameter; the default k = 2 is the classical AIC. Only used when method = "classical".
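A minimal sketch of the two df choices (this assumes the cars.gb model fitted in the Examples below; the actual values depend on the data):

### classical AIC with trace-based vs. active-set degrees of freedom
AIC(cars.gb, method = "classical", df = "trace")
AIC(cars.gb, method = "classical", df = "actset")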
Details

print shows a dense representation of the model fit and coef extracts the regression coefficients of a model fitted using the glmboost or the gamboost function. The predict function can be used to predict the response variable for new observations, whereas fitted extracts the regression fit for the observations in the learning sample. When allIterations = TRUE, the matrix of all (linear) predictors for boosting iterations 1 to mstop is returned.
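As a hedged sketch of allIterations (again assuming the cars.gb model from the Examples below):

### matrix of linear predictors, one column per boosting iteration
lp <- predict(cars.gb, allIterations = TRUE)
dim(lp)   ### number of observations times number of boosting iterations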
For (generalized) linear and additive models, the AIC function can be used to compute both the classical and the corrected AIC (Hurvich et al., 1998; only available when family = GaussReg() was used), which is useful for determining the optimal number of boosting iterations to apply (this number can be extracted via mstop). The degrees of freedom are either computed via the trace of the boosting hat matrix (which is rather slow even for moderate sample sizes) or via the number of variables (non-zero coefficients) that entered the model so far (faster, but only meaningful for linear models fitted via glmboost; see Hastie, 2007). In addition, the general minimum description length criterion (gMDL; Buhlmann and Yu, 2006) can be computed using the AIC function.
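The three criteria can be computed side by side; a sketch, again assuming the cars.gb fit from the Examples below:

### corrected AIC, classical AIC and gMDL for the same model fit
AIC(cars.gb, method = "corrected")
AIC(cars.gb, method = "classical")
AIC(cars.gb, method = "gMDL")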
Note that logLik and AIC only make sense when the corresponding Family implements the appropriate loss function.
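For instance (assuming the cars.gb fit from the Examples below, fitted with the default Gaussian family, whose loss corresponds to the Gaussian log-likelihood):

### log-likelihood of the boosting fit at the current mstop
logLik(cars.gb)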
References

Peter Buhlmann and Torsten Hothorn (2007), Boosting algorithms: regularization, prediction and model fitting. Statistical Science, 22(4), 477--505.

Trevor Hastie (2007), Discussion of "Boosting Algorithms: Regularization, Prediction and Model Fitting" by Peter Buhlmann and Torsten Hothorn. Statistical Science, 22(4), 505.

Clifford M. Hurvich, Jeffrey S. Simonoff and Chih-Ling Tsai (1998), Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. Journal of the Royal Statistical Society, Series B, 60(2), 271--293.

Peter Buhlmann and Bin Yu (2006), Sparse Boosting. Journal of Machine Learning Research, 7, 1001--1024.
See Also

gamboost, glmboost and blackboost for model fitting. See cvrisk for the cross-validated stopping iteration.

Examples

### a simple two-dimensional example: cars data
cars.gb <- glmboost(dist ~ speed, data = cars,
                    control = boost_control(mstop = 2000))
cars.gb
### initial number of boosting iterations
mstop(cars.gb)
### AIC criterion
aic <- AIC(cars.gb, method = "corrected")
aic
### coefficients for optimal number of boosting iterations
coef(cars.gb[mstop(aic)])
plot(cars$dist, predict(cars.gb[mstop(aic)]),
     ylim = range(cars$dist))
abline(a = 0, b = 1)
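As an alternative to the AIC-based stopping above, a sketch of the cross-validated stopping iteration via cvrisk (default resampling settings; this can take a while):

### cross-validated risk and the corresponding stopping iteration
cv <- cvrisk(cars.gb)
mstop(cv)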