Gradient boosting for optimizing arbitrary loss functions, where component-wise smoothing procedures, e.g., P-splines, are utilized as additive base learners.
GAMBoostModel(
  family = NULL,
  baselearner = c("bbs", "bols", "btree", "bss", "bns"),
  dfbase = 4,
  mstop = 100,
  nu = 0.1,
  risk = c("inbag", "oobag", "none"),
  stopintern = FALSE,
  trace = FALSE
)
Arguments

family: optional Family object. Set automatically according to the class type of the response variable.

baselearner: character string specifying the component-wise base learner to be used.

dfbase: global degrees of freedom for P-spline base learners ("bbs").

mstop: number of initial boosting iterations.

nu: step size or shrinkage parameter between 0 and 1.

risk: method to use in computing the empirical risk for each boosting iteration.

stopintern: logical indicating whether the boosting algorithm stops internally when the out-of-bag risk increases at a subsequent iteration.

trace: logical indicating whether status information is printed during the fitting process.
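As an illustration of how these arguments combine, a model specification overriding a few defaults might look like the following sketch (the particular values are chosen for demonstration only, not as recommendations):

```r
library(MachineShop)  # provides GAMBoostModel

## Illustrative settings: tree base learners, more iterations, smaller step size
model <- GAMBoostModel(baselearner = "btree", mstop = 200, nu = 0.05)
```

The resulting object can then be passed as the model argument to the package's fitting functions.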
Value

MLModel class object.

Response types: binary, numeric, Surv

Automatic tuning of grid parameter: mstop
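Because mstop is an automatically tunable grid parameter, the model can be wrapped for tuning; the sketch below assumes MachineShop's TunedModel() interface and uses the Pima.tr dataset from the example in this page:

```r
library(MachineShop)  # provides GAMBoostModel, TunedModel, fit
library(MASS)         # provides the Pima.tr dataset

## Hedged sketch: tune mstop over its automatic grid during fitting
tuned_fit <- fit(type ~ ., data = Pima.tr, model = TunedModel(GAMBoostModel))
```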
Default values for the NULL arguments and further model details can be found in the source links below.
See Also

gamboost, Family, baselearners, fit, resample
Examples

library(MASS)

fit(type ~ ., data = Pima.tr, model = GAMBoostModel)
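The resample function listed under See Also can be used to estimate the model's predictive performance; a minimal sketch, assuming MachineShop's default resampling control:

```r
library(MachineShop)  # provides GAMBoostModel, resample
library(MASS)         # provides the Pima.tr dataset

## Hedged sketch: estimate predictive performance by resampling
res <- resample(type ~ ., data = Pima.tr, model = GAMBoostModel)
summary(res)
```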