A General Framework For Bagging
bag provides a framework for bagging classification or regression models. The user can provide their own functions for model building, prediction and aggregation of predictions (see Details below).
## S3 method for class 'default': bag(x, y, B = 10, vars = ncol(x), bagControl = NULL, ...)
bagControl(fit = NULL, predict = NULL, aggregate = NULL, downSample = FALSE, oob = TRUE, allowParallel = TRUE)
ldaBag plsBag nbBag ctreeBag svmBag nnetBag
## S3 method for class 'bag': predict(object, newdata = NULL, ...)
- x: a matrix or data frame of predictors
- y: a vector of outcomes
- B: the number of bootstrap samples to train over
- bagControl: a list of options produced by bagControl
- ...: arguments to pass to the model function
- fit: a function that has arguments x, y and ... and produces a model object that can later be used for prediction. Example functions are found in ldaBag, plsBag, nbBag, ctreeBag, svmBag and nnetBag.
- predict: a function that generates predictions for each sub-model. The function should have arguments object and x. The output of the function can be any type of object (see the example below where posterior probabilities are generated). Example functions are found in ldaBag, plsBag, nbBag, ctreeBag, svmBag and nnetBag.
- aggregate: a function with arguments x and type. This function takes the output of the predict function and reduces the bagged predictions to a single prediction per sample. The type argument can be used to switch between predicting classes or class probabilities for classification models. Example functions are found in ldaBag, plsBag, nbBag, ctreeBag, svmBag and nnetBag.
- downSample: a logical: for classification, should the data set be randomly sampled so that each class has the same number of samples as the smallest class?
- oob: a logical: should out-of-bag statistics be computed and the predictions retained?
- allowParallel: if a parallel backend is loaded and available, should the function use it?
- vars: an integer. If this argument is not NULL, a random sample of size vars is taken of the predictors in each bagging iteration. If NULL, all predictors are used.
- object: an object of class bag
- newdata: a matrix or data frame of samples for prediction. Note that this argument must have a non-null value.
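The three bagControl functions above can be illustrated with a minimal sketch for a regression model. The function and argument names below (lmFit, lmPred, lmAggregate) are illustrative, not part of caret; they simply follow the fit(x, y, ...), predict(object, x), and aggregate(x, type) signatures described above, using base R's lm() as the underlying model.

```r
## fit: build one sub-model from a bootstrap sample of the training data
lmFit <- function(x, y, ...) {
  dat <- as.data.frame(x)
  dat$.outcome <- y
  lm(.outcome ~ ., data = dat, ...)
}

## predict: generate predictions for one sub-model
lmPred <- function(object, x) {
  predict(object, newdata = as.data.frame(x))
}

## aggregate: x is a list of per-model prediction vectors; average them
lmAggregate <- function(x, type = "raw") {
  rowMeans(do.call("cbind", x))
}

## These would then be plugged in via:
## bagControl(fit = lmFit, predict = lmPred, aggregate = lmAggregate)
```

Note that each function is self-contained: bag handles the bootstrap sampling and (when vars is set) the predictor subsetting, so these functions only see one resampled data set at a time.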
The function is basically a framework where users can plug in any model to assess the effect of bagging. Example functions can be found in ldaBag, plsBag, nbBag, ctreeBag, svmBag and nnetBag. Each has elements fit, pred and aggregate.
One note: when vars is not NULL, the sub-setting occurs before the fit and predict functions are called. In this way, the user probably does not need to account for the change in predictors in their functions.
When using bag with train, classification models should use type = "prob" inside of the predict function so that predict.train(object, newdata, type = "prob") will work.
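For classification, the aggregate function typically needs to honor the type argument so that both class and probability predictions are available. The sketch below is illustrative (the name probAggregate is not from caret): it pools a list of per-model class-probability matrices by averaging, then either returns the pooled probabilities or converts them to class labels.

```r
## aggregate sketch for classification: x is a list of matrices of class
## probabilities (one matrix per sub-model, rows = samples, cols = classes)
probAggregate <- function(x, type = "class") {
  pooled <- Reduce("+", x) / length(x)   # element-wise mean across models
  if (type == "prob") {
    pooled                               # averaged posterior probabilities
  } else {
    ## pick the most probable class for each sample
    colnames(pooled)[apply(pooled, 1, which.max)]
  }
}
```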
If a parallel backend is registered, the foreach package is used to train the models in parallel.
bag produces an object of class bag with elements:
- fits: a list with two sub-objects: the fit object has the actual model fit for that bagged sample and the vars object is either NULL or a vector of integers corresponding to which predictors were sampled for that model
- control: a mirror of the arguments passed into bagControl
- call: the call
- B: the number of bagging iterations
- dims: the dimensions of the training set
## A simple example of bagging conditional inference regression trees:
data(BloodBrain)

## treebag <- bag(bbbDescr, logBBB, B = 10,
##                bagControl = bagControl(fit = ctreeBag$fit,
##                                        predict = ctreeBag$pred,
##                                        aggregate = ctreeBag$aggregate))

## An example of pooling posterior probabilities to generate class predictions
data(mdrr)

## remove some zero variance predictors and linear dependencies
mdrrDescr <- mdrrDescr[, -nearZeroVar(mdrrDescr)]
mdrrDescr <- mdrrDescr[, -findCorrelation(cor(mdrrDescr), .95)]

## basicLDA <- train(mdrrDescr, mdrrClass, "lda")

## bagLDA2 <- train(mdrrDescr, mdrrClass,
##                  "bag",
##                  B = 10,
##                  bagControl = bagControl(fit = ldaBag$fit,
##                                          predict = ldaBag$pred,
##                                          aggregate = ldaBag$aggregate),
##                  tuneGrid = data.frame(vars = c((1:10)*10, ncol(mdrrDescr))))