Fits the AdaBoost.M1 (Freund and Schapire, 1996) and SAMME (Zhu et al., 2009) algorithms using classification trees as single classifiers.
AdaBoostModel(
  boos = TRUE,
  mfinal = 100,
  coeflearn = c("Breiman", "Freund", "Zhu"),
  minsplit = 20,
  minbucket = round(minsplit/3),
  cp = 0.01,
  maxcompete = 4,
  maxsurrogate = 5,
  usesurrogate = 2,
  xval = 10,
  surrogatestyle = 0,
  maxdepth = 30
)
boos: if TRUE, then bootstrap samples are drawn from the training set using the observation weights at each iteration; if FALSE, then all observations are used with their weights.

mfinal: number of iterations for which boosting is run.

coeflearn: learning algorithm; one of "Breiman", "Freund", or "Zhu".

minsplit: minimum number of observations that must exist in a node in order for a split to be attempted.

minbucket: minimum number of observations in any terminal node.

cp: complexity parameter.

maxcompete: number of competitor splits retained in the output.

maxsurrogate: number of surrogate splits retained in the output.

usesurrogate: how to use surrogates in the splitting process.

xval: number of cross-validations.

surrogatestyle: controls the selection of a best surrogate.

maxdepth: maximum depth of any node of the final tree, with the root node counted as depth 0.
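As an illustration of how the constructor and tree-control arguments fit together, the following is a minimal sketch, assuming the MachineShop package supplies AdaBoostModel() and fit() (the specific argument values are illustrative choices, not recommendations):

```r
library(MachineShop)  # assumed to provide AdaBoostModel() and fit()

# Illustrative settings: a smaller, shallower ensemble than the defaults.
model <- AdaBoostModel(
  boos = TRUE,           # resample with observation weights each iteration
  mfinal = 25,           # fewer boosting iterations than the default 100
  coeflearn = "Breiman", # one of "Breiman", "Freund", "Zhu"
  maxdepth = 3,          # shallow base classification trees
  cp = 0                 # let maxdepth, not cp, limit tree growth
)

modelfit <- fit(Species ~ ., data = iris, model = model)
```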
Value: an MLModel class object.
Response types: factor

Automatic tuning of grid parameters: mfinal, maxdepth, coeflearn*

* included only in randomly sampled grid points
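Because mfinal, maxdepth, and coeflearn are flagged as automatically tunable, the model can in principle be tuned over a grid of those parameters. A hedged sketch, assuming MachineShop's TunedModel() wrapper is available and accepts an integer grid size:

```r
library(MachineShop)  # assumed to provide TunedModel() and fit()

# Search an automatic grid over the tunable parameters
# (mfinal, maxdepth, coeflearn); the grid size of 3 is illustrative.
tuned <- TunedModel(AdaBoostModel, grid = 3)
tunedfit <- fit(Species ~ ., data = iris, model = tuned)
```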
Further model details can be found in the source link below.
fit(Species ~ ., data = iris, model = AdaBoostModel(mfinal = 5))