Optimizer functions for gradient and likelihood boosting with bamlss. In each boosting iteration the function selects the model term with the largest contribution to the log-likelihood, AIC or BIC.
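To illustrate the selection scheme, the following is a self-contained toy sketch in base R (an illustration only, not the bamlss internals): componentwise boosting for a Gaussian mean, where in each iteration every covariate is fitted to the current residuals and the covariate with the largest log-likelihood gain receives a weak update of step size nu.
## Toy componentwise boosting sketch (illustration only, not boost() itself).
set.seed(1)
n <- 200
X <- cbind(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
y <- 1 + 2 * X[, "x1"] - X[, "x3"] + rnorm(n)
nu <- 0.1; maxit <- 200
beta <- setNames(numeric(ncol(X)), colnames(X))
eta <- rep(mean(y), n)
loglik <- function(eta) sum(dnorm(y, mean = eta, sd = sd(y - eta), log = TRUE))
for(i in seq_len(maxit)) {
  ## Fit each base-learner (simple least squares) to the current residuals.
  b <- apply(X, 2, function(x) sum(x * (y - eta)) / sum(x^2))
  ## Log-likelihood contribution of each candidate weak update.
  contrib <- sapply(colnames(X), function(j) loglik(eta + nu * b[j] * X[, j]) - loglik(eta))
  j <- which.max(contrib)          ## select the term with the largest contribution
  beta[j] <- beta[j] + nu * b[j]   ## weak update of the selected term only
  eta <- eta + nu * b[j] * X[, j]
}
round(beta, 2)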
## Gradient boosting optimizer.
boost(x, y, family, weights = NULL,
offset = NULL, nu = 0.1, nu.adapt = TRUE, df = 4, maxit = 400,
mstop = NULL, maxq = NULL, qsel.splitfactor = FALSE,
verbose = TRUE, digits = 4, flush = TRUE,
eps = .Machine$double.eps^0.25,
nback = NULL, plot = TRUE, initialize = TRUE,
stop.criterion = NULL, select.type = 1,
force.stop = TRUE, hatmatrix = !is.null(stop.criterion),
reverse.edf = FALSE, approx.edf = FALSE,
always = FALSE, ...)
## Modified likelihood based boosting.
boostm(x, y, family, offset = NULL,
nu = 0.1, df = 3, maxit = 400, mstop = NULL,
verbose = TRUE, digits = 4, flush = TRUE,
eps = .Machine$double.eps^0.25, plot = TRUE,
initialize = TRUE, stop.criterion = "BIC",
force.stop = !is.null(stop.criterion),
do.optim = TRUE, always = FALSE, ...)
## Boosting summary extractor.
boost_summary(object, ...)
## Plot all boosting paths.
boost_plot(x, which = c("loglik", "loglik.contrib", "parameters",
"aic", "bic", "user"), intercept = TRUE, spar = TRUE, mstop = NULL,
name = NULL, drop = NULL, labels = NULL, color = NULL, ...)
## Boosting summary printing and plotting.
# S3 method for boost_summary
print(x, summary = TRUE, plot = TRUE,
which = c("loglik", "loglik.contrib"), intercept = TRUE,
spar = TRUE, ...)
# S3 method for boost_summary
plot(x, ...)
## Model frame for out-of-sample selection.
boost_frame(formula, train, test, family = "gaussian", ...)
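These optimizers are typically not called directly; they are passed to bamlss() via its optimizer argument, and boosting-specific arguments such as nu or maxit are forwarded through '...' (a minimal sketch, assuming a data frame d and a formula f as in the examples below):
b <- bamlss(f, data = d, optimizer = boost, sampler = FALSE,
  nu = 0.1, maxit = 400)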
x: For function boost(), the x list, as returned from function bamlss.frame, holding all model matrices and other information that is used for fitting the model. For the plotting function boost_plot(), the corresponding bamlss object fitted with the boost() optimizer.
y: The model response, as returned from function bamlss.frame.
family: A bamlss family object, see family.bamlss.
weights: Prior weights on the data, as returned from function bamlss.frame.
offset: Can be used to supply model offsets for use in fitting, as returned from function bamlss.frame.
nu: Numeric, between [0, 1], controls the step size, i.e., the amount that should be added to model term parameters.
nu.adapt: Logical. If set to TRUE (default), the step size nu is divided by 2 if the current boosting iteration did not improve the log-likelihood.
df: Integer, defines the initial degrees of freedom that should be assigned to each smooth model term. May also be a named vector; the names must match the model term labels, e.g., as provided by summary.bamlss.
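For example, per-term degrees of freedom can be supplied as a named vector (a sketch; the term labels shown here are illustrative and must match the labels of your model):
## Hypothetical term labels, see summary.bamlss for the actual labels.
b <- bamlss(f, data = d, optimizer = boost,
  df = c("s(x1)" = 2, "s(x2)" = 6, "te(lon,lat)" = 5))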
maxit: Integer, the maximum number of boosting iterations.
mstop: For convenience, overwrites maxit.
maxq: Integer, defines the maximum number of selected base-learners. The algorithm stops if this number is exceeded.
qsel.splitfactor: Logical, if set to TRUE dummy variables of categorical predictors are counted individually.
name: Character, the name of the coefficient (group) that should be plotted. Note that the string provided in name will be removed from the labels on the 4th axis.
drop: Character, the name of the coefficient (group) that should not be plotted.
labels: A character vector of labels that should be used on the 4th axis.
color: Colors or a color function that creates colors for the (group) paths.
verbose: Print information during runtime of the algorithm.
digits: Set the digits for printing when verbose = TRUE.
flush: Use flush.console() for displaying the current output in the console.
eps: The tolerance used as a stopping mechanism, see argument nback.
nback: Integer. If nback is not NULL, the algorithm stops if the change in the log-likelihood over the last nback iterations is smaller than or equal to eps. If maxit = NULL the maximum number of iterations is set to 10000.
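One way to read this stopping rule is the following sketch (plain R, an interpretation for illustration, not the package code):
## Stop once the log-likelihood changed by at most eps over the last nback iterations.
stop_now <- function(loglik, nback, eps) {
  if(length(loglik) <= nback) return(FALSE)
  abs(diff(range(tail(loglik, nback + 1)))) <= eps
}
stop_now(c(-512.3, -498.1, -497.95, -497.949, -497.949), nback = 2, eps = 0.01)  ## TRUE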
plot: Should the boosting summary be printed and plotted?
initialize: Logical, should intercepts be initialized?
stop.criterion: Character, selects the information criterion that should be used to determine the optimum number of boosting iterations; either "AIC" or "BIC" is possible. Note that this feature requires computing hat-matrices for each distributional parameter, therefore the routine may be slow and memory intensive.
select.type: Should model terms be selected by the log-likelihood contribution, select.type = 1, or by the corresponding stop.criterion, select.type = 2?
force.stop: Logical, should the algorithm stop if the information criterion increases?
do.optim: Logical. Should smoothing parameters be optimized in each boosting iteration?
hatmatrix: Logical, if set to TRUE the hat-matrices for each distributional parameter will be computed. The hat-matrices are used to determine the effective (equivalent) degrees of freedom in each boosting iteration, i.e., it is possible to compute information criteria like the AIC or BIC for selecting the optimum number of boosting iterations.
reverse.edf: Logical. Instead of computing degrees of freedom with hat-matrices, the actual smoothing parameters are reverse engineered to compute the corresponding actual smoother matrix. Note that this option is still experimental.
approx.edf: Logical. Another experimental and fast approximation of the degrees of freedom.
always: Logical or character. Should the intercepts be forced to be updated in each boosting iteration? If always = TRUE each intercept of each distributional parameter is updated; if always = "best" only the intercept corresponding to the distributional parameter of the best-fitting model term is updated.
object: A bamlss object that was fitted using boost().
summary: Should the summary be printed?
which: Which of the provided plots should be created?
intercept: Should the coefficient paths of intercepts be dropped in the plot?
spar: Should graphical parameters be set with par()?
formula: See bamlss.frame.
train, test: Data frames used for training and testing the model.
...: For function boost(), arguments passed to bamlss.engine.setup. For function boost_summary(), arguments passed to function print.boost_summary().
For function boost_summary() a list containing information on selection frequencies etc.
For functions boost() and boostm() a list containing the following objects:
A named list of the fitted values based on the last boosting iteration of the modeled parameters of the selected distribution.
A matrix, each row corresponds to the parameter values of one boosting iteration.
The boosting summary, which can be printed and plotted.
The function does not take care of variable scaling for the linear parts! This must be done by the user, e.g., one option is to use argument scale.d in function bamlss.frame, which uses scale().
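For example (covariate names are illustrative), scaling can either be requested when the model frame is set up or done manually beforehand:
## Let bamlss()/bamlss.frame() scale the data ...
b <- bamlss(f, data = d, optimizer = boost, scale.d = TRUE)
## ... or scale linear covariates manually before fitting.
d$x1 <- as.numeric(scale(d$x1))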
Function boost() does not select the optimum stopping iteration! The modified likelihood based algorithm implemented in function boostm() is still experimental!
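A common workflow is therefore to run boost() for a generous number of iterations and choose the stopping iteration afterwards, e.g., by inspecting the boosting summary and paths; the value of mstop used here is purely illustrative:
boost_summary(b)
boost_plot(b, which = "loglik.contrib")
## Extract coefficients and predictions for the chosen stopping iteration.
parameters(b, mstop = 200)
p <- predict(b, mstop = 200)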
# NOT RUN {
## Simulate data.
set.seed(123)
d <- GAMart()
## Estimate model.
f <- num ~ x1 + x2 + x3 + lon + lat +
s(x1) + s(x2) + s(x3) + s(lon) + s(lat) + te(lon,lat)
b <- bamlss(f, data = d, optimizer = boost,
sampler = FALSE, scale.d = TRUE, nu = 0.01,
maxit = 1000, plot = FALSE)
## Plot estimated effects.
plot(b)
## Print and plot the boosting summary.
boost_summary(b, plot = FALSE)
boost_plot(b, which = 1)
boost_plot(b, which = 2)
boost_plot(b, which = 3, name = "mu.s.te(lon,lat).")
## Extract estimated parameters for certain
## boosting iterations.
parameters(b, mstop = 1)
parameters(b, mstop = 100)
## Also works with predict().
head(do.call("cbind", predict(b, mstop = 1)))
head(do.call("cbind", predict(b, mstop = 100)))
## Another example using the modified likelihood
## boosting algorithm.
f <- list(
num ~ x1 + x2 + x3 + lon + lat +
s(x1) + s(x2) + s(x3) + s(lon) + s(lat) + te(lon,lat),
sigma ~ x1 + x2 + x3 + lon + lat +
s(x1) + s(x2) + s(x3) + s(lon) + s(lat) + te(lon,lat)
)
b <- bamlss(f, data = d, optimizer = boostm,
sampler = FALSE, scale.d = TRUE, nu = 0.05,
maxit = 400, stop.criterion = "AIC", force.stop = FALSE)
## Plot estimated effects.
plot(b)
## Plot AIC and log-lik contributions.
boost_plot(b, "AIC")
boost_plot(b, "loglik.contrib")
## Out-of-sample selection of model terms.
set.seed(123)
d <- GAMart(n = 5000)
## Split data into training and testing
i <- sample(1:2, size = nrow(d), replace = TRUE)
d.test <- subset(d, i == 1)
d.train <- subset(d, i == 2)
## Model formula
f <- list(
num ~ s(x1) + s(x2) + s(x3),
sigma ~ s(x1) + s(x2) + s(x3)
)
## Create model frame for out-of-sample selection.
sm <- boost_frame(f, train = d.train, test = d.test, family = "gaussian")
## Out-of-sample selection function.
sfun <- function(parameters) {
sm$parameters <- parameters
p <- predict(sm, type = "parameter")
-1 * sum(sm$family$d(d.test$num, p, log = TRUE))
}
## Start boosting with out-of-sample negative
## log-likelihood selection of model terms.
b <- bamlss(f, data = d.train, sampler = FALSE, optimizer = boost,
selectfun = sfun, always = "best")
## Plot curve of negative out-of-sample log-likelihood.
boost_plot(b, which = "user")
# }