MachineShop (version 1.1.0)

GAMBoostModel: Gradient Boosting with Additive Models

Description

Gradient boosting for optimizing arbitrary loss functions, where component-wise base learners, e.g., smoothing procedures, are utilized as additive model components.

Usage

GAMBoostModel(family = NULL, baselearner = c("bbs", "bols", "btree",
  "bss", "bns"), dfbase = 4, mstop = 100, nu = 0.1,
  risk = c("inbag", "oobag", "none"), stopintern = FALSE,
  trace = FALSE)

Arguments

family

Family object. Set automatically according to the class type of the response variable.

baselearner

character specifying the component-wise base learner to be used.

dfbase

global degrees of freedom for P-spline base learners ("bbs").

mstop

number of initial boosting iterations.

nu

step size or shrinkage parameter between 0 and 1.

risk

method to use in computing the empirical risk for each boosting iteration.

stopintern

logical indicating whether the boosting algorithm stops internally when the out-of-bag risk increases at a subsequent iteration.

trace

logical indicating whether status information is printed during the fitting process.
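
As a brief illustration of the arguments above, a model object with non-default settings can be constructed directly. The values below are a minimal sketch chosen for illustration only, not recommended defaults.

## Boosted additive model with tree base learners, more iterations,
## and a smaller step size (illustrative values)
GAMBoostModel(baselearner = "btree", mstop = 200, nu = 0.05)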

Value

MLModel class object.

Details

Response Types:

binary, numeric, Surv

Automatic Tuning Grid Parameters:

mstop

Default values for the NULL arguments and further model details can be found in the source links below.
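
For instance, the mstop grid parameter can be tuned over its automatic grid with the tune function referenced in See Also. A minimal sketch, assuming tune accepts a formula, data set, and model as shown, with grid and control settings left at their defaults:

library(MachineShop)
library(MASS)

## Tune mstop over the automatic tuning grid (sketch)
tune(type ~ ., data = Pima.tr, model = GAMBoostModel)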

See Also

gamboost, Family, baselearners, fit, resample, tune

Examples

library(MachineShop)
library(MASS)

## Fit a boosted additive classification model to the Pima diabetes data
fit(type ~ ., data = Pima.tr, model = GAMBoostModel())
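
As a follow-up to the fit above, resampled performance can be estimated with the resample function referenced in See Also. A minimal sketch, assuming the default resampling control:

## Estimate predictive performance by resampling (sketch)
res <- resample(type ~ ., data = Pima.tr, model = GAMBoostModel())
summary(res)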