
Usage

train(x, ...)

## S3 method for class 'default':
train(x, y,
      method = "rf",
      ...,
      weights = NULL,
      metric = ifelse(is.factor(y), "Accuracy", "RMSE"),
      maximize = ifelse(metric == "RMSE", FALSE, TRUE),
      trControl = trainControl(),
      tuneGrid = NULL,
      tuneLength = 3)

## S3 method for class 'formula':
train(form, data, ..., weights, subset, na.action, contrasts = NULL)
Arguments

x: a data frame containing training data where samples are in rows and features are in columns.

y: a numeric or factor vector containing the outcome for each sample.

form: a formula of the form y ~ x1 + x2 + ...

data: the data frame from which variables specified in the formula are preferentially to be taken.

method: a string specifying which classification or regression model to use. Possible values include ada, bagEarth, bagFDA, blackboost, cforest, ctree, ctree2, earth and the other method values enumerated in the table below.

...: arguments passed to the classification or regression routine (such as randomForest). Errors will occur if values for tuning parameters are passed here.

trControl: a list of values that define how this function acts; see trainControl. (NOTE: If given, this argument must be named.)

tuneGrid: a data frame with possible tuning values, with one column per tuning parameter, named with a leading dot; see createGrid in this package. (NOTE: If given, this argument must be named.)

tuneLength: an integer denoting the number of levels for each tuning parameter that should be generated by createGrid. (NOTE: If given, this argument must be named.)

Value

A list is returned of class train containing the final model fit (finalModel), the performance estimates for each combination of tuning parameters (results), the winning parameter values (bestTune) and the individual resampling results (resample, which may be NULL). The returnResamp argument of trainControl controls how much of the resampled results are saved.
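For example, the fit and the resampling summaries can be pulled from the returned object. A minimal sketch; the bestTune, results and finalModel components follow the description above, anything else here is illustrative:

library(caret)
data(iris)

knnFit <- train(iris[, 1:4], iris[, 5], method = "knn")

knnFit$bestTune    ## winning tuning parameter combination
knnFit$results     ## resampled performance for each candidate
knnFit$finalModel  ## model fit on the entire training set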
Details

train can be used to tune models by picking the complexity parameters that are associated with the optimal resampling statistics. For each model, a grid of tuning parameters (if any) is created and the model is trained on slightly different data sets for each candidate combination of tuning parameters. For each data set, the performance on the held-out samples is calculated, and the mean and standard deviation are summarized for each combination. The combination with the optimal resampling statistic is chosen as the final model, and the entire training set is then used to fit the final model.

A variety of models are currently available. The table below enumerates the models, the values of the method argument, and the complexity parameters used by train.
Model                                      method Value    Package       Tuning Parameter(s)
Generalized linear model                   glm             stats         none
Recursive partitioning                     rpart           rpart         maxdepth
                                           ctree           party         mincriterion
                                           ctree2          party         maxdepth
Boosted trees                              gbm             gbm           interaction.depth, n.trees, shrinkage
                                           blackboost      mboost        maxdepth, mstop
                                           ada             ada           maxdepth, iter, nu
Boosted regression models                  glmboost        mboost        mstop
                                           gamboost        mboost        mstop
                                           logitBoost      caTools       nIter
Random forests                             rf              randomForest  mtry
                                           parRF           randomForest  mtry
                                           cforest         party         mtry
Bagged trees                               treebag         ipred         none
Node harvest                               nodeHarvest     nodeHarvest   maxinter, node
Elastic net (glm)                          glmnet          glmnet        alpha, lambda
Neural networks                            nnet            nnet          decay, size
                                           pcaNNet         caret         decay, size
Projection pursuit regression              ppr             stats         nterms
Principal component regression             pcr             pls           ncomp
Partial least squares                      pls             pls           ncomp
Sparse partial least squares               spls            spls          K, eta, kappa
Support vector machines                    svmLinear       kernlab       C
                                           svmRadial       kernlab       sigma, C
                                           svmPoly         kernlab       scale, degree, C
Relevance vector machines                  rvmLinear       kernlab       none
                                           rvmRadial       kernlab       sigma
                                           rvmPoly         kernlab       scale, degree
Least squares support vector machines      lssvmRadial     kernlab       sigma
Gaussian processes                         gaussprLinear   kernlab       none
                                           gaussprRadial   kernlab       sigma
                                           gaussprPoly     kernlab       scale, degree
Linear least squares                       lm              stats         none
                                           rlm             MASS          none
Multivariate adaptive regression splines   earth           earth         degree, nprune
Bagged MARS                                bagEarth        caret         degree, nprune
Rule-based regression                      M5Rules         RWeka         pruned
Elastic net                                enet            elasticnet    lambda, fraction
Least angle regression                     lars            lars          fraction
                                           lars2           lars          steps
The lasso                                  enet            elasticnet    fraction
Penalized linear models                    penalized       penalized     lambda1, lambda2
Supervised principal components            superpc         superpc       n.components, threshold
Linear discriminant analysis               lda             MASS          none
                                           Linda           rrcov         none
Quadratic discriminant analysis            qda             MASS          none
                                           QdaCov          rrcov         none
Stepwise discriminant analysis             stepLDA         klaR          maxvar, direction
                                           stepQDA         klaR          maxvar, direction
Stepwise diagonal discriminant analysis    sddaLDA         SDDA          none
                                           sddaQDA         SDDA          none
Shrinkage discriminant analysis            sda             sda           diagonal
Sparse linear discriminant analysis        sparseLDA       sparseLDA     NumVars, lambda
Regularized discriminant analysis          rda             klaR          lambda, gamma
Mixture discriminant analysis              mda             mda           subclasses
Sparse mixture discriminant analysis       smda            sparseLDA     NumVars, R, lambda
Penalized discriminant analysis            pda             mda           lambda
                                           pda2            mda           df
Stabilised linear discriminant analysis    slda            ipred         none
Flexible discriminant analysis (MARS)      fda             mda           degree, nprune
Bagged FDA                                 bagFDA          caret         degree, nprune
Logistic/multinomial regression            multinom        nnet          decay
Penalized logistic regression              plr             stepPlr       lambda, cp
Rule-based classification                  J48             RWeka         C
                                           OneR            RWeka         none
                                           PART            RWeka         threshold, pruned
                                           JRip            RWeka         NumOpt
Bayesian multinomial probit model          vbmpRadial      vbmp          estimateTheta
k nearest neighbors                        knn3            caret         k
Nearest shrunken centroids                 pam             pamr          threshold
Naive Bayes                                nb              klaR          usekernel
Generalized partial least squares          gpls            gpls          K.prov
Learning vector quantization               lvq             class         k
By default, the function createGrid is used to define the candidate values of the tuning parameters. The user can also specify their own grid. To do this, a data frame is created with one column for each tuning parameter in the model. The column names must be the same as those listed in the table above, with a leading dot. For example, ncomp would have the column heading .ncomp. This data frame can then be passed to train as the tuneGrid argument.
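For instance, a minimal sketch of a hand-built grid for a partial least squares fit; the object name plsGrid and the candidate values are illustrative, not from the original page:

library(caret)
library(mlbench)
data(BostonHousing)

## one column per tuning parameter, named with a leading dot
plsGrid <- data.frame(.ncomp = 1:5)

plsFit <- train(medv ~ ., data = BostonHousing,
                method = "pls",
                tuneGrid = plsGrid)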
In some cases, models may require control arguments, which can be passed via the three dots argument. Note that some models can also specify tuning parameters in their control objects; if specified, these values will be superseded by those given via the tuneGrid argument.
The vignette entitled "caret Manual -- Model Building" has more details and examples related to this function.
train can be used with "explicit parallelism", where different resamples (e.g. cross-validation groups) can be split up and run on multiple machines or processors. By default, train will use a single processor on the host machine. To use more, the computeFunction and computeArgs arguments in trainControl can be used. computeFunction is used to pass a function that takes arguments named X and FUN; internally, train will pass the data and modeling functions through using these arguments. By default, train uses lapply. Alternatively, any function that emulates lapply but distributes jobs across multiple machines/processors can be used. Arguments to such a function can be passed (if needed) via the computeArgs argument in trainControl. Examples are given below.
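As a simpler alternative to the MPI and NWS examples below, any lapply-alike works; a minimal sketch using mclapply from the parallel package, where the wrapper name forkCalcs and the cores element are illustrative assumptions:

library(caret)
library(parallel)

## a compute function must accept X and FUN; extra arguments
## arrive via trainControl's computeArgs list
forkCalcs <- function(X, FUN, ...) {
  theDots <- list(...)
  mclapply(X, FUN, mc.cores = theDots$cores)
}

forkControl <- trainControl(computeFunction = forkCalcs,
                            computeArgs = list(cores = 2))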
See Also

trainControl, createGrid, createFolds
Examples

#######################################
## Classification Example

library(caret)
data(iris)

TrainData <- iris[, 1:4]
TrainClasses <- iris[, 5]

knnFit1 <- train(TrainData, TrainClasses,
                 method = "knn",
                 tuneLength = 10,
                 trControl = trainControl(method = "cv"))

knnFit2 <- train(TrainData, TrainClasses,
                 method = "knn",
                 tuneLength = 10,
                 trControl = trainControl(method = "boot"))

library(MASS)
nnetFit <- train(TrainData, TrainClasses,
                 method = "nnet",
                 tuneLength = 2,
                 trace = FALSE,  ## passed through ... to nnet()
                 maxit = 100)
#######################################
## Regression Example

library(mlbench)
data(BostonHousing)

lmFit <- train(medv ~ . + rm:lstat,
               data = BostonHousing,
               method = "lm")

library(rpart)
rpartFit <- train(medv ~ .,
                  data = BostonHousing,
                  method = "rpart",
                  tuneLength = 9)
#######################################
## Example with a custom metric

## The summary function receives the held-out results in 'data',
## which has columns 'obs' and 'pred'
madSummary <- function(data, lev = NULL, model = NULL) {
  out <- mad(data$obs - data$pred, na.rm = TRUE)
  names(out) <- "MAD"
  out
}

robustControl <- trainControl(summaryFunction = madSummary)
marsGrid <- expand.grid(.degree = 1, .nprune = (1:10) * 2)

earthFit <- train(medv ~ .,
                  data = BostonHousing,
                  method = "earth",
                  tuneGrid = marsGrid,
                  metric = "MAD",
                  maximize = FALSE,
                  trControl = robustControl)
#######################################
## Parallel Processing Example via MPI

## A function to emulate lapply in parallel
mpiCalcs <- function(X, FUN, ...) {
  theDots <- list(...)
  parLapply(theDots$cl, X, FUN)
}

library(snow)
cl <- makeCluster(5, "MPI")

## 50 bootstrap models distributed across 5 workers
mpiControl <- trainControl(workers = 5,
                           number = 50,
                           computeFunction = mpiCalcs,
                           computeArgs = list(cl = cl))

set.seed(1)
usingMPI <- train(medv ~ .,
                  data = BostonHousing,
                  method = "glmboost",
                  trControl = mpiControl)
stopCluster(cl)
################################################
## Parallel Random Forest using foreach and doMPI

library(doMPI)
cl <- startMPIcluster(count = 5, verbose = TRUE)
registerDoMPI(cl)

rfMPI <- train(medv ~ .,
               data = BostonHousing,
               method = "parRF")
closeCluster(cl)
#######################################
## Parallel Processing Example via NWS

nwsCalcs <- function(X, FUN, ...) {
  theDots <- list(...)
  eachElem(theDots$sObj,
           fun = FUN,
           elementArgs = list(X))
}

library(nws)
sObj <- sleigh(workerCount = 5)

nwsControl <- trainControl(workers = 5,
                           number = 50,
                           computeFunction = nwsCalcs,
                           computeArgs = list(sObj = sObj))

set.seed(1)
usingNWS <- train(medv ~ .,
                  data = BostonHousing,
                  method = "glmboost",
                  trControl = nwsControl)
close(sObj)
#######################################
## Parallel Random Forest Models using
## the foreach package and MPI

library(doMPI)
cl <- startMPIcluster(2)
registerDoMPI(cl)

set.seed(1)
parallelRF <- train(medv ~ .,
                    data = BostonHousing,
                    method = "parRF")
closeCluster(cl)