randomForest: Classification and Regression with Random Forest

Description:

randomForest implements Breiman's random forest algorithm (based on
Breiman and Cutler's original Fortran code) for classification and
regression. It can also be used in unsupervised mode for assessing
proximities among data points.

Usage:

## S3 method for class 'formula':
randomForest(formula, data=NULL, ..., subset, na.action=na.fail)

## S3 method for class 'default':
randomForest(x, y=NULL, xtest=NULL, ytest=NULL, ntree=500,
             mtry=if (!is.null(y) && !is.factor(y))
                 max(floor(ncol(x)/3), 1) else floor(sqrt(ncol(x))),
             replace=TRUE, classwt=NULL, cutoff, strata,
             sampsize = if (replace) nrow(x) else ceiling(.632*nrow(x)),
             nodesize = if (!is.null(y) && !is.factor(y)) 5 else 1,
             importance=FALSE, localImp=FALSE, nPerm=1,
             proximity=FALSE, oob.prox=proximity,
             norm.votes=TRUE, do.trace=FALSE,
             keep.forest=!is.null(y) && is.null(xtest), corr.bias=FALSE,
             keep.inbag=FALSE, ...)

## S3 method for class 'randomForest':
print(x, ...)
Arguments:

data:        an optional data frame containing the variables in the
             model. By default the variables are taken from the
             environment which randomForest is called from.

x:           a data frame or a matrix of predictors, or a formula
             describing the model to be fitted (for the print method, an
             randomForest object).

y:           a response vector. If a factor, classification is assumed,
             otherwise regression is assumed. If omitted, randomForest
             will run in unsupervised mode.

xtest:       a data frame or matrix (like x) containing predictors for
             the test set.

mtry:        number of variables randomly sampled as candidates at each
             split. Note that the default values are different for
             classification (sqrt(p), where p is the number of variables
             in x) and regression (p/3).

localImp:    should casewise importance measure be computed? (Setting
             this to TRUE will override importance.)

norm.votes:  if TRUE (default), the final result of votes are expressed
             as fractions. If FALSE, raw vote counts are returned
             (useful for combining results from different runs). Ignored
             for regression.

do.trace:    if set to TRUE, give a more verbose output as randomForest
             is run. If set to some integer, then running output is
             printed for every do.trace trees.

keep.forest: if set to FALSE, the forest will not be retained in the
             output object. If xtest is given, defaults to FALSE.

keep.inbag:  should an n by ntree matrix be returned that keeps track of
             which samples are "in-bag" in which trees (but not how many
             times, if sampling with replacement)?

...:         optional parameters to be passed to the low level function
             randomForest.default.
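The default values of mtry can be reproduced directly from the expressions in the Usage section. A minimal base-R sketch, assuming a hypothetical data set with p = 16 predictors (the value 16 is illustrative only):

```r
## Reproducing the default mtry for p = 16 predictors.
p <- 16
mtry.classification <- floor(sqrt(p))      # classification: floor(sqrt(p))
mtry.regression     <- max(floor(p/3), 1)  # regression: floor(p/3), at least 1
```

For p = 16 this gives 4 candidate variables per split for classification and 5 for regression, matching the expressions passed as the mtry default above.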
Value:

An object of class randomForest, which is a list with the following
components:

type:         one of regression, classification, or unsupervised.

importance:   a matrix with nclass + 2 (for classification) or two (for
              regression) columns. For classification, the first nclass
              columns are the class-specific measures computed as mean
              decrease in accuracy. The nclass + 1st column is the mean
              decrease in accuracy over all classes. The last column is
              the mean decrease in Gini index. For regression, the first
              column is the mean decrease in accuracy and the second the
              mean decrease in MSE. If importance=FALSE, the last
              measure is still returned as a vector.

importanceSD: the "standard errors" of the permutation-based importance
              measure. For classification, a p by nclass + 1 matrix
              corresponding to the first nclass + 1 columns of the
              importance matrix. For regression, a length p vector.

localImp:     a p by n matrix containing the casewise importance
              measures. NULL if localImp=FALSE.

forest:       a list that contains the entire forest; NULL if
              randomForest is run in unsupervised mode or if
              keep.forest=FALSE.

proximity:    if proximity=TRUE when randomForest is called, a matrix of
              proximity measures among the input (based on the frequency
              that pairs of data points are in the same terminal nodes).

mse:          (regression only) vector of mean square errors: sum of
              squared residuals divided by n.

rsq:          (regression only) "pseudo R-squared": 1 - mse / Var(y).

test:         if test set is given (through the xtest or additionally
              ytest arguments), this component is a list which contains
              the corresponding predicted, err.rate, confusion, votes
              (for classification) or predicted, mse and rsq (for
              regression) for the test set. If proximity=TRUE, there is
              also a component, proximity, which contains the proximity
              among the test set as well as proximity between test and
              training data.

References:

Breiman, L. (2002), "Manual On Setting Up, Using, And Understanding
Random Forests V3.1".
See Also:

predict.randomForest, varImpPlot

Examples:
## Classification:
## data(iris)
set.seed(71)
iris.rf <- randomForest(Species ~ ., data=iris, importance=TRUE,
proximity=TRUE)
print(iris.rf)
## Look at variable importance:
round(importance(iris.rf), 2)
## Do MDS on 1 - proximity:
iris.mds <- cmdscale(1 - iris.rf$proximity, eig=TRUE)
op <- par(pty="s")
pairs(cbind(iris[,1:4], iris.mds$points), cex=0.6, gap=0,
col=c("red", "green", "blue")[as.numeric(iris$Species)],
main="Iris Data: Predictors and MDS of Proximity Based on RandomForest")
par(op)
print(iris.mds$GOF)
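## As a side note on the votes component: with norm.votes=TRUE (the
## default) the per-class OOB vote counts are rescaled to fractions.
## A base-R sketch with made-up counts (not taken from the fit above):

```r
## Toy OOB vote counts for a single observation (illustrative numbers).
counts <- c(setosa = 120, versicolor = 30, virginica = 0)
## What norm.votes=TRUE reports: fractions of ntree that sum to 1.
fractions <- counts / sum(counts)
```

With norm.votes=FALSE the raw counts are kept instead, which is what makes results from separate runs addable before normalizing.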
## The `unsupervised' case:
set.seed(17)
iris.urf <- randomForest(iris[, -5])
MDSplot(iris.urf, iris$Species)
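## The proximity that MDSplot visualizes is, per the Value section, the
## fraction of trees in which two cases fall in the same terminal node.
## A hand-computed sketch with made-up terminal-node ids:

```r
## Terminal-node ids of two cases across 5 trees (illustrative values).
node.i <- c(4, 7, 2, 4, 9)
node.j <- c(4, 7, 5, 4, 9)
## Proximity of the pair: fraction of trees where they co-occur.
prox.ij <- mean(node.i == node.j)
```

Here the pair shares a terminal node in 4 of 5 trees, so prox.ij is 0.8; 1 - proximity is then a natural dissimilarity for cmdscale or MDSplot.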
## Regression:
## data(airquality)
set.seed(131)
ozone.rf <- randomForest(Ozone ~ ., data=airquality, mtry=3,
importance=TRUE, na.action=na.omit)
print(ozone.rf)
## Show "importance" of variables: higher value means more important:
round(importance(ozone.rf), 2)
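## The mse and rsq components reported for regression can be recomputed
## by hand: mse is the sum of squared (OOB) residuals divided by n, and
## rsq = 1 - mse / Var(y). A sketch with toy numbers (not the airquality
## fit; var() here is R's sample variance, and the package's exact
## variance divisor may differ slightly):

```r
## Toy response and out-of-bag predictions (illustrative values only).
y    <- c(3, 5, 7, 9)
yhat <- c(3.5, 4.5, 7.5, 8.5)
mse  <- sum((y - yhat)^2) / length(y)  # mean squared OOB residual
rsq  <- 1 - mse / var(y)               # "pseudo R-squared"
```

A rsq near 1 means the OOB predictions explain most of the variance in y; it can be negative when the forest predicts worse than the mean of y.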