GAMBoost (version 1.2-3)

optimGAMBoostPenalty: Coarse line search for adequate GAMBoost penalty parameter

Description

This routine helps in finding a penalty value that leads to an ``optimal'' number of boosting steps for GAMBoost (determined by AIC or cross-validation) that is not too small and lies within a specified range.

Usage

optimGAMBoostPenalty(x=NULL, y, x.linear=NULL,
                     minstepno=50, maxstepno=200, start.penalty=500,
                     method=c("AICmin","CVmin"),
                     penalty=100, penalty.linear=100,
                     just.penalty=FALSE, iter.max=10, upper.margin=0.05,
                     trace=TRUE, parallel=FALSE,
                     calc.hat=TRUE, calc.se=TRUE,
                     which.penalty=ifelse(!is.null(x),"smoothness","linear"),
                     ...)

Arguments

x
n * p matrix of covariates with potentially non-linear influence. If this is not given (and argument x.linear is employed), a generalized linear model is fitted.
y
response vector of length n.
x.linear
optional n * q matrix of covariates with linear influence.
minstepno, maxstepno
range within which the ``optimal'' number of boosting steps should lie.
start.penalty
start value for the search for the appropriate penalty.
method
determines how the optimal number of boosting steps corresponding to a fixed penalty is evaluated. With "AICmin" the AIC is used and with "CVmin" cross-validation is used as a criterion.
penalty,penalty.linear
penalty value for whichever penalty is not being optimized (see which.penalty).
just.penalty
logical value indicating whether just the optimal penalty value should be returned or a GAMBoost fit obtained with this penalty.
iter.max
maximum number of search iterations.
upper.margin
specifies the fraction of maxstepno that is used as an upper margin in which an AIC/cross-validation minimum is not accepted as such (necessary because of random fluctuations of these criteria).
parallel
logical value indicating whether the cross-validation folds (for method="CVmin") should be evaluated in parallel on a compute cluster. This requires the package snowfall; see the cross-validation sketch in the Details section below.
calc.hat,calc.se
arguments passed to GAMBoost.
which.penalty
indicates whether the penalty for the smooth components (value "smoothness") or for the linear components ("linear") should be optimized.
trace
logical value indicating whether information on progress should be printed.
...
miscellaneous parameters for GAMBoost.
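
As a brief illustration of how several of these arguments interact, the following sketch (with made-up data; the dimensions are arbitrary assumptions) optimizes only the penalty of the linear components, leaves the smoothness penalty at its fixed value, and returns just the penalty value:

x <- matrix(runif(100*3, min=-1, max=1), 100, 3)         # smooth covariates
x.linear <- matrix(runif(100*5, min=-1, max=1), 100, 5)  # linear covariates
y <- rbinom(100, 1, binomial()$linkinv(x[,1] + x.linear[,1]))

opt.pen <- optimGAMBoostPenalty(x, y, x.linear=x.linear,
                                which.penalty="linear",
                                just.penalty=TRUE,
                                family=binomial(), trace=FALSE)
opt.pen    # the selected penalty for the linear components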

Value

GAMBoost fit with the optimal penalty, including an additional component optimGAMBoost.criterion that gives the values of the criterion (AIC or cross-validation) corresponding to the final penalty; or, if just.penalty=TRUE, only the optimal penalty value itself.
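
For example, assuming a fit opt.gb1 obtained as in the Examples section below, the selected penalty and the course of the criterion can be inspected as follows (the plotting choices here are illustrative, not part of the package):

opt.gb1$penalty                              # penalty selected by the search
plot(opt.gb1$optimGAMBoost.criterion, type="l",
     xlab="boosting step", ylab="AIC / CV criterion")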

Details

The penalty parameter(s) for GAMBoost have to be chosen only very coarsely. In Tutz and Binder (2006) it is suggested just to make sure that the optimal number of boosting steps (according to AIC or cross-validation) is larger than or equal to 50. With a smaller number of steps boosting may become too ``greedy'' and show sub-optimal performance. This procedure uses a very coarse line search, so one should specify a rather large range of boosting steps.
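
The exact update rule of the line search is internal to the package; purely as a conceptual sketch (the doubling/halving factor below is a hypothetical choice, and a per-step AIC component of the GAMBoost fit is assumed), the idea can be pictured as follows, with x and y as in the Examples below:

minstepno <- 50; maxstepno <- 200; upper.margin <- 0.05
penalty <- 500                              # start.penalty
for (i in seq_len(10)) {                    # iter.max
  fit <- GAMBoost(x, y, penalty=penalty, stepno=maxstepno,
                  family=binomial())
  opt.step <- which.min(fit$AIC)            # assumed per-step AIC component
  if (opt.step < minstepno) {
    penalty <- penalty * 2                  # too few steps: penalty too weak
  } else if (opt.step > maxstepno * (1 - upper.margin)) {
    penalty <- penalty / 2                  # minimum in upper margin: too strong
  } else break                              # optimum lies in the desired range
}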

Penalty optimization based on AIC should work fine most of the time, but for a large number of covariates (e.g. 500 covariates with 100 observations) problems can arise and the (more costly) cross-validation should be employed.
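
In such high-dimensional settings, cross-validation-based tuning could be invoked as sketched below; the snowfall initialization is an assumption about a typical setup, not part of this package:

library(snowfall)
sfInit(parallel=TRUE, cpus=4)     # spawn workers for the CV folds
opt.gb.cv <- optimGAMBoostPenalty(x, y, method="CVmin", parallel=TRUE,
                                  minstepno=50, maxstepno=200,
                                  start.penalty=5000,
                                  family=binomial(), trace=TRUE)
sfStop()                          # shut the cluster down again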

References

Tutz, G. and Binder, H. (2006). Generalized additive modeling with implicit variable selection by likelihood-based boosting. Biometrics, 62, 961--971.

See Also

GAMBoost

Examples

## Not run: 
##  Generate some data

x <- matrix(runif(100*8, min=-1, max=1), 100, 8)
eta <- -0.5 + 2*x[,1] + 2*x[,3]^2
y <- rbinom(100, 1, binomial()$linkinv(eta))

##  Find a penalty (starting from a large value, here: 5000)
##  that leads to an optimal number of boosting steps (based on AIC)
##  in the range [50,200] and return a GAMBoost fit with
##  this penalty

opt.gb1 <- optimGAMBoostPenalty(x, y, minstepno=50, maxstepno=200,
                                start.penalty=5000, family=binomial(),
                                trace=TRUE)

##  extract the penalty found/used for the fit
opt.gb1$penalty

## End(Not run)
