rfe
Backwards Feature Selection
A simple backwards selection, a.k.a. recursive feature elimination (RFE), algorithm
Keywords
models
Usage
rfe(x, ...)

## S3 method for class 'default':
rfe(x, y,
sizes = 2^(2:4),
metric = ifelse(is.factor(y), "Accuracy", "RMSE"),
maximize = ifelse(metric == "RMSE", FALSE, TRUE),
rfeControl = rfeControl(),
...)
rfeIter(x, y,
testX, testY,
sizes,
rfeControl = rfeControl(),
...)
Arguments
- x
- a matrix or data frame of predictors for model training. This object must have unique column names.
- y
- a vector of training set outcomes (either numeric or factor)
- testX
- a matrix or data frame of test set predictors. This must have the same column names as x
- testY
- a vector of test set outcomes
- sizes
- a numeric vector of integers corresponding to the number of features that should be retained
- metric
- a string that specifies what summary metric will be used to select the optimal model. By default, possible values are "RMSE" and "Rsquared" for regression and "Accuracy" and "Kappa" for classification. If custom performance metrics are used (via the functions argument in rfeControl), the value of metric should match one of the measures computed by those functions (see the sketch after this list)
- maximize
- a logical: should the metric be maximized or minimized?
- rfeControl
- a list of options, including functions for fitting and prediction. See the package vignette or rfeControl for examples
- ...
- options to pass to the model fitting function
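For instance, a call that selects subsets by Kappa rather than the default Accuracy might look like the sketch below. This is an illustration only; predictors (a data frame) and classes (a factor) are hypothetical stand-ins for your own data.
## Hypothetical data: 'predictors' is a data frame and 'classes' a factor
kappaProfile <- rfe(predictors, classes,
                    sizes = c(5, 10, 20),
                    metric = "Kappa",
                    maximize = TRUE,
                    rfeControl = rfeControl(functions = nbFuncs,
                                            number = 50))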
Details
This function implements backwards selection of predictors based on predictor importance ranking. The predictors are ranked and the less important ones are sequentially eliminated. The package vignette for feature selection has detailed descriptions of the algorithms.
rfeIter is the basic algorithm, while rfe wraps these operations inside of resampling. To avoid selection bias, it is better to use the function rfe than rfeIter.
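As a rough illustration of the backwards pass (not the package's actual code), the elimination loop can be sketched as follows; fitFunc and rankFunc are hypothetical stand-ins for the fitting and ranking functions supplied through rfeControl:
## Conceptual sketch of one backwards-elimination pass (illustration only).
## 'fitFunc' and 'rankFunc' are hypothetical stand-ins for the functions
## supplied via rfeControl().
rfeSketch <- function(x, y, sizes, fitFunc, rankFunc)
{
  keep <- colnames(x)
  surviving <- list(keep)             ## first element: all predictors
  for(s in rev(sort(sizes)))          ## work down through the subset sizes
  {
    fit <- fitFunc(x[, keep, drop = FALSE], y)
    ## assume rankFunc returns predictor names ordered from most to
    ## least important
    ranked <- rankFunc(fit, x[, keep, drop = FALSE], y)
    keep <- ranked[1:s]               ## drop the least important predictors
    surviving <- c(surviving, list(keep))
  }
  surviving
}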
Value
A list with elements
- finalVariables
- a list of size length(sizes) + 1 containing the column names of the "surviving" predictors at each stage of selection. The first element corresponds to all the predictors (i.e. size = ncol(x))
- pred
- a data frame with columns for the test set outcome, the predicted outcome and the subset size
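For example, the survivors at each stage can be inspected directly; a minimal sketch, where profile is a hypothetical object holding a prior rfeIter result:
## 'profile' is a hypothetical object returned by rfeIter()
profile$finalVariables[[1]]   ## all ncol(x) predictor names
profile$finalVariables[[2]]   ## survivors after the first elimination step
head(profile$pred)            ## test outcome, prediction and subset size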
See Also
rfeControl
Examples
data(BloodBrain)
## drop near-zero variance predictors and standardize the rest
x <- scale(bbbDescr[,-nearZeroVar(bbbDescr)])
## remove highly correlated predictors
x <- x[, -findCorrelation(cor(x), .8)]
x <- as.data.frame(x)
set.seed(1)
lmProfile <- rfe(x, logBBB,
sizes = c(2:25, 30, 35, 40, 45, 50, 55, 60, 65),
rfeControl = rfeControl(functions = lmFuncs,
number = 200))
set.seed(1)
lmProfile2 <- rfe(x, logBBB,
sizes = c(2:25, 30, 35, 40, 45, 50, 55, 60, 65),
rfeControl = rfeControl(functions = lmFuncs,
rerank = TRUE,
number = 200))
## compare the two linear model profiles
xyplot(lmProfile$results$RMSE + lmProfile2$results$RMSE ~
       lmProfile$results$Variables,
       type = c("g", "p", "l"),
       auto.key = TRUE)
rfProfile <- rfe(x, logBBB,
sizes = c(2, 5, 10, 20),
rfeControl = rfeControl(functions = rfFuncs))
bagProfile <- rfe(x, logBBB,
sizes = c(2, 5, 10, 20),
rfeControl = rfeControl(functions = treebagFuncs))
set.seed(1)
svmProfile <- rfe(x, logBBB,
sizes = c(5, 20, 65),
rfeControl = rfeControl(functions = caretFuncs,
number = 200),
## pass options to train()
method = "svmRadial",
fit = FALSE)
## classification with no resampling
data(mdrr)
mdrrDescr <- scale(mdrrDescr[,-nearZeroVar(mdrrDescr)])
mdrrDescr <- mdrrDescr[, -findCorrelation(cor(mdrrDescr), .8)]
set.seed(1)
inTrain <- createDataPartition(mdrrClass, p = .75, list = FALSE)[,1]
train <- mdrrDescr[ inTrain, ]
test <- mdrrDescr[-inTrain, ]
trainClass <- mdrrClass[ inTrain]
testClass <- mdrrClass[-inTrain]
preProc <- preProcess(train)
train <- predict(preProc, train)
test <- predict(preProc, test)
nbProfile <- rfeIter(train, trainClass,
test, testClass,
sizes = c(1:10, 15, 30),
rfeControl = rfeControl(functions = nbFuncs))
## split the test set predictions up by subset size
splitUp <- split(nbProfile$pred,
                 factor(nbProfile$pred$subset))
## compute Accuracy and Kappa for each subset size
testResults <- lapply(splitUp,
                      function(u) postResample(u$pred, u$obs))
Variables <- as.numeric(names(testResults))
testResults <- do.call("rbind", testResults)
testResults <- cbind(testResults, Variables)
## plot accuracy against the number of retained predictors
plot(testResults[,3], testResults[,1])
#######################################
## Parallel Processing Example via MPI
## A function to emulate lapply in parallel
mpiCalcs <- function(X, FUN, ...)
{
theDots <- list(...)
parLapply(theDots$cl, X, FUN)
}
library(snow)
cl <- makeCluster(5, "MPI")
set.seed(1)
lmProfile <- rfe(x, logBBB,
sizes = c(2:25, 30, 35, 40, 45, 50, 55, 60, 65),
rfeControl = rfeControl(functions = lmFuncs,
number = 200,
workers = 5,
computeFunction = mpiCalcs,
computeArgs = list(cl = cl)))
stopCluster(cl)
#######################################
## Parallel Processing Example via NWS
## A function to emulate lapply in parallel
nwsCalcs <- function(X, FUN, ...)
{
theDots <- list(...)
eachElem(theDots$sObj,
fun = FUN,
elementArgs = list(X))
}
library(nws)
sObj <- sleigh(workerCount = 5)
set.seed(1)
lmProfile <- rfe(x, logBBB,
sizes = c(2:25, 30, 35, 40, 45, 50, 55, 60, 65),
rfeControl = rfeControl(functions = lmFuncs,
number = 200,
workers = 5,
computeFunction = nwsCalcs,
computeArgs = list(sObj = sObj)))
close(sObj)