train
Fit Predictive Models over Different Tuning Parameters
This function sets up a grid of tuning parameters for a number of classification and regression routines, fits each model and calculates a resampling-based performance measure.
- Keywords
- models
Usage
train(x, ...)

## S3 method for class 'default':
train(x, y,
      method = "rf",
      ...,
      weights = NULL,
      metric = ifelse(is.factor(y), "Accuracy", "RMSE"),
      maximize = ifelse(metric == "RMSE", FALSE, TRUE),
      trControl = trainControl(),
      tuneGrid = NULL,
      tuneLength = 3)

## S3 method for class 'formula':
train(form, data, ..., weights, subset, na.action, contrasts = NULL)
Arguments
- x
- a data frame containing training data where samples are in rows and features are in columns.
- y
- a numeric or factor vector containing the outcome for each sample.
- form
- A formula of the form
y ~ x1 + x2 + ...
- data
- Data frame from which variables specified in formula are preferentially to be taken.
- weights
- a numeric vector of case weights. This argument will only affect models that allow case weights.
- subset
- An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)
- na.action
- A function to specify the action to be taken if NAs are found. The default action is for the procedure to fail. An alternative is na.omit, which leads to rejection of cases with missing values on any required variable. (NOTE: If given, this argument must be named.)
- contrasts
- a list of contrasts to be used for some or all of the factors appearing as variables in the model formula.
- method
- a string specifying which classification or regression model to use. Possible values include ada, bag, bagEarth, bagFDA, blackboost, cforest, ctree, ctree2 and many others; see the Details section for the full list of models and their tuning parameters.
- ...
- arguments passed to the classification or regression routine (such as randomForest). Errors will occur if values for tuning parameters are passed here.
- metric
- a string that specifies what summary metric will be used to select the optimal model. By default, possible values are "RMSE" and "Rsquared" for regression and "Accuracy" and "Kappa" for classification. If custom performance metrics are used (via the summaryFunction argument in trainControl), the value of metric should correspond to one of the metrics produced by that function.
- maximize
- a logical: should the metric be maximized or minimized?
- trControl
- a list of values that define how this function acts. See trainControl. (NOTE: If given, this argument must be named.)
- tuneGrid
- a data frame with possible tuning values. The columns are named the same as the tuning parameters in each method preceded by a period (e.g. .decay, .lambda). Also, a function can be passed to tuneGrid with arguments that include len.
- tuneLength
- an integer denoting the number of levels for each tuning parameter that should be generated by createGrid. (NOTE: If given, this argument must be named.)
Details
train can be used to tune models by picking the complexity parameters that are associated with the optimal resampling statistics. For a particular model, a grid of parameters (if any) is created and the model is trained on slightly different data for each candidate combination of tuning parameters. Across each data set, the performance of held-out samples is calculated, and the mean and standard deviation are summarized for each combination. The combination with the optimal resampling statistic is chosen as the final model and the entire training set is used to fit a final model.
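As a brief illustration of this process (a minimal sketch using the Boston housing data that also appears in the Examples below; the object name rpartTune is illustrative), the resampled summaries for each candidate value and the refitted final model can be inspected from the returned object:

library(caret)
library(mlbench)
data(BostonHousing)
## tune maxdepth over five candidate values with the default resampling scheme
rpartTune <- train(medv ~ ., data = BostonHousing,
                   "rpart",
                   tuneLength = 5)
rpartTune$results     ## resampled mean and SD of the performance per candidate
rpartTune$finalModel  ## the rpart model refit on the entire training set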
A variety of models are currently available. The list below enumerates the models, the corresponding values of the method argument and, in parentheses, the tuning parameters used by train.
- Generalized linear model: glm (none), glmStepAIC (none)
- Recursive partitioning: rpart (maxdepth), ctree (mincriterion), ctree2 (maxdepth)
- Boosted trees: gbm (interaction.depth, n.trees, shrinkage), blackboost (maxdepth, mstop), ada (maxdepth, iter, nu)
- Boosted regression models: glmboost (mstop), gamboost (mstop), logitBoost (nIter)
- Random forests: rf (mtry), parRF (mtry), cforest (mtry)
- Bagging: treebag (none), bag (vars)
- Other trees: nodeHarvest (maxinter, node), partDSA (cut.off.growth, MPD)
- Logic regression: logreg (ntrees, treesize)
- Elastic net (glm): glmnet (alpha, lambda)
- Neural networks: nnet (decay, size), neuralnet (layer1, layer2, layer3), pcaNNet (decay, size)
- Projection pursuit regression: ppr (nterms)
- Principal component regression: pcr (ncomp)
- Independent component regression: icr (n.comp)
- Partial least squares: pls (ncomp)
- Sparse partial least squares: spls (K, eta, kappa)
- Support vector machines: svmLinear (C), svmRadial (sigma, C), svmPoly (scale, degree, C)
- Relevance vector machines: rvmLinear (none), rvmRadial (sigma), rvmPoly (scale, degree)
- Least squares support vector machines: lssvmRadial (sigma)
- Gaussian processes: gaussprLinear (none), gaussprRadial (sigma), gaussprPoly (scale, degree)
- Linear least squares: lm (none), lmStepAIC (none)
- Robust linear regression: rlm (none)
- Multivariate adaptive regression splines: earth (degree, nprune)
- Bagged MARS: bagEarth (degree, nprune)
- Rule-based regression: M5Rules (pruned)
- Penalized linear models: penalized (lambda1, lambda2), enet (lambda, fraction), lars (fraction), lars2 (steps), enet (fraction), foba (lambda, k)
- Supervised principal components: superpc (n.components, threshold)
- Quantile regression forests: qrf (mtry)
- Linear discriminant analysis: lda (none), Linda (none)
- Quadratic discriminant analysis: qda (none), QdaCov (none)
- Stabilised linear discriminant analysis: slda (none)
- Heteroscedastic discriminant analysis: hda (newdim, lambda, gamma)
- Stepwise discriminant analysis: stepLDA (maxvar, direction), stepQDA (maxvar, direction)
- Stepwise diagonal discriminant analysis: sddaLDA (none), sddaQDA (none)
- Shrinkage discriminant analysis: sda (diagonal)
- Sparse linear discriminant analysis: sparseLDA (NumVars, lambda)
- Regularized discriminant analysis: rda (lambda, gamma)
- Mixture discriminant analysis: mda (subclasses)
- Sparse mixture discriminant analysis: smda (NumVars, R, lambda)
- Penalized discriminant analysis: pda (lambda), pda2 (df)
- High dimensional discriminant analysis: hdda (model, threshold)
- Flexible discriminant analysis (MARS): fda (degree, nprune)
- Bagged FDA: bagFDA (degree, nprune)
- Logistic/multinomial regression: multinom (decay)
- Penalized logistic regression: plr (lambda, cp)
- Rule-based classification: J48 (C), OneR (none), PART (threshold, pruned), JRip (NumOpt)
- Logic forests: logforest (none)
- Variational Bayesian multinomial probit regression: vbmpRadial (estimateTheta)
- k nearest neighbors: knn3 (k)
- Nearest shrunken centroids: pam (threshold), scrda (alpha, delta)
- Naive Bayes: nb (usekernel)
- Generalized partial least squares: gpls (K.prov)
- Learning vector quantization: lvq (k)
- ROC curves: rocc (xgenes)
By default, the function createGrid is used to define the candidate values of the tuning parameters. The user can also specify their own. To do this, a data frame is created with columns for each tuning parameter in the model. The column names must be the same as the tuning parameter names listed above with a leading dot. For example, ncomp would have the column heading .ncomp. This data frame can then be passed to train via the tuneGrid argument.
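For example (a minimal sketch using the Boston housing data from the earlier sketch; plsGrid and plsFit are illustrative names), a hand-made grid for a partial least squares model turns the ncomp parameter into a column named .ncomp:

## candidate numbers of PLS components, supplied instead of the default grid
plsGrid <- data.frame(.ncomp = 1:5)
plsFit <- train(medv ~ ., data = BostonHousing,
                "pls",
                tuneGrid = plsGrid)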
In some cases, models may require control arguments. These can be passed via the three dots argument. Note that some models can specify tuning parameters in their control objects; if specified, these values will be superseded by those given in the tuneGrid argument.
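The same mechanism passes ordinary fitting arguments through to the underlying routine. As a hedged sketch (using the iris objects defined in the Examples below; rfFit is an illustrative name), ntree is an argument of randomForest rather than a tuning parameter, so it can safely go through the dots while mtry is still tuned by train:

## extra arguments in ... are handed to randomForest unchanged
rfFit <- train(TrainData, TrainClasses,
               "rf",
               ntree = 1000,
               tuneLength = 3)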
The vignette entitled "caret Manual -- Model Building" has more details and examples related to this function.
train can be used with "explicit parallelism", where different resamples (e.g. cross-validation groups) can be split up and run on multiple machines or processors. By default, train will use a single processor on the host machine. To use more, the computeFunction and computeArgs arguments in trainControl can be used. computeFunction is used to pass a function that takes arguments named X and FUN. Internally, train will pass the data and modeling functions through using these arguments. By default, train uses lapply. Alternatively, any function that emulates lapply but distributes jobs across multiple machines/processors can be used. Arguments to such a function can be passed (if needed) via the computeArgs argument in trainControl. Examples are given below using the snow, doMPI and nws packages.
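Before those full examples, a hedged sketch: any drop-in replacement for lapply with the same X/FUN contract will do. Here the multicore package's mclapply is wrapped (forkCalcs, forkControl and forkFit are illustrative names, and the worker count shown is illustrative):

library(multicore)
forkCalcs <- function(X, FUN, ...) mclapply(X, FUN)
forkControl <- trainControl(workers = 2,
                            number = 25,
                            computeFunction = forkCalcs)
forkFit <- train(medv ~ ., data = BostonHousing,
                 "glmboost",
                 trControl = forkControl)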
Value
A list is returned of class train containing:
- modelType: an identifier of the model type.
- results: a data frame of the training error rate and values of the tuning parameters.
- call: the (matched) function call with dots expanded.
- dots: a list containing any ... values passed to the original call.
- metric: a string that specifies what summary metric will be used to select the optimal model.
- trControl: the list of control parameters.
- finalModel: a fit object using the best parameters.
- trainingData: a data frame of the data used for training.
- resample: a data frame with columns for each performance metric. Each row corresponds to each resample. If leave-one-out cross-validation or out-of-bag estimation methods are requested, this will be NULL. The returnResamp argument of trainControl controls how much of the resampled results are saved.
- perfNames: a character vector of performance metrics that are produced by the summary function.
- maximize: a logical recycled from the function arguments.
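As a short, hedged illustration (using the iris objects from the Examples below; cvControl and knnCV are illustrative names, and it is assumed that returnResamp accepts the value "all"), the per-resample performance can be retained and inspected:

cvControl <- trainControl(method = "cv",
                          returnResamp = "all")
knnCV <- train(TrainData, TrainClasses,
               "knn",
               tuneLength = 5,
               trControl = cvControl)
knnCV$resample   ## one row per held-out fold, with a column per metric
knnCV$perfNames  ## the names of the performance metrics computed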
References
Kuhn, M. (2008), "Building Predictive Models in R Using the caret Package", Journal of Statistical Software, 28(5), http://www.jstatsoft.org/v28/i05/
See Also
trainControl, createGrid
Examples
#######################################
## Classification Example
data(iris)
TrainData <- iris[,1:4]
TrainClasses <- iris[,5]
knnFit1 <- train(TrainData, TrainClasses,
"knn",
tuneLength = 10,
trControl = trainControl(method = "cv"))
knnFit2 <- train(TrainData, TrainClasses,
"knn", tuneLength = 10,
trControl = trainControl(method = "boot"))
library(MASS)
nnetFit <- train(TrainData, TrainClasses,
"nnet",
tuneLength = 2,
trace = FALSE,
maxit = 100)
#######################################
## Regression Example
library(mlbench)
data(BostonHousing)
lmFit <- train(medv ~ . + rm:lstat,
data = BostonHousing,
"lm")
library(rpart)
rpartFit <- train(medv ~ .,
data = BostonHousing,
"rpart",
tuneLength = 9)
#######################################
## Example with a custom metric
madSummary <- function (data,
lev = NULL,
model = NULL)
{
out <- mad(data$obs - data$pred,
na.rm = TRUE)
names(out) <- "MAD"
out
}
robustControl <- trainControl(summaryFunction = madSummary)
marsGrid <- expand.grid(.degree = 1,
.nprune = (1:10) * 2)
earthFit <- train(medv ~ .,
data = BostonHousing,
"earth",
tuneGrid = marsGrid,
metric = "MAD",
maximize = FALSE,
trControl = robustControl)
#######################################
## Parallel Processing Example via MPI
## A function to emulate lapply in parallel
mpiCalcs <- function(X, FUN, ...)
{
theDots <- list(...)
parLapply(theDots$cl, X, FUN)
}
library(snow)
cl <- makeCluster(5, "MPI")
## 50 bootstrap models distributed across 5 workers
mpiControl <- trainControl(workers = 5,
number = 50,
computeFunction = mpiCalcs,
computeArgs = list(cl = cl))
set.seed(1)
usingMPI <- train(medv ~ .,
data = BostonHousing,
"glmboost",
trControl = mpiControl)
#######################################
## Parallel Processing Example via NWS
nwsCalcs <- function(X, FUN, ...)
{
theDots <- list(...)
eachElem(theDots$sObj,
fun = FUN,
elementArgs = list(X))
}
library(nws)
sObj <- sleigh(workerCount = 5)
nwsControl <- trainControl(workers = 5,
number = 50,
computeFunction = nwsCalcs,
computeArgs = list(sObj = sObj))
set.seed(1)
usingNWS <- train(medv ~ .,
data = BostonHousing,
"glmboost",
trControl = nwsControl)
close(sObj)
#######################################
## Parallel Random Forest Models using
## the foreach package and MPI
library(doMPI)
cl <- startMPIcluster(2)
registerDoMPI(cl)
set.seed(1)
parallelRF <- train(medv ~ .,
data = BostonHousing,
"parRF")
closeCluster(cl)