randomForest (version 3.3-6)

randomForest: Classification and Regression with Random Forest

Description

randomForest implements Breiman's random forest algorithm (based on Breiman and Cutler's original Fortran code) for classification and regression. It can also be used in unsupervised mode for locating outliers or assessing proximities among data points.
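
As a quick orientation, here is a minimal sketch (not part of the original page) of the three modes, using the same data sets as the Examples section below:

## Minimal sketch of the three modes (full versions in Examples below):
library(randomForest)
data(iris); data(airquality)
rf.cls <- randomForest(Species ~ ., data=iris)      ## classification: factor response
rf.reg <- randomForest(Ozone ~ ., data=airquality,
                       na.action=na.omit)           ## regression: numeric response
rf.uns <- randomForest(iris[, -5])                  ## unsupervised: no response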

Usage

## S3 method for class 'formula':
randomForest(formula, data=NULL, subset, ...)
## S3 method for class 'default':
randomForest(x, y=NULL, xtest, ytest, addclass=0, ntree=500,
  mtry=ifelse(is.null(y) || is.factor(y), floor(sqrt(ncol(x))),
    max(floor(ncol(x)/3), 1)), classwt=NULL,
  nodesize=ifelse(is.null(y) || is.factor(y), 1, 5),
  importance=FALSE,
  proximity=FALSE, outscale=FALSE, norm.votes=TRUE, do.trace=FALSE,
  keep.forest=is.null(xtest), ...)
## S3 method for class 'randomForest':
print(x, ...)

Arguments

formula
a symbolic description of the model to be fitted.
data
an optional data frame containing the variables in the model. By default the variables are taken from the environment which randomForest is called from.
subset
an index vector indicating which rows should be used.
x
a data frame or a matrix of predictors (for the print method, a randomForest object).
y
a response vector. If a factor, classification is assumed; otherwise regression is assumed. If omitted, randomForest runs in unsupervised mode with addclass=1 (unless addclass is explicitly set otherwise).
xtest
a data frame or matrix (like x) containing predictors for the test set.
ytest
response for the test set.
addclass
=0 (default) do not add a synthetic class to the data. =1 label the input data as class 1 and add a synthetic class (labeled 2) by randomly sampling from the product of the empirical marginal distributions of the input. =2 like 1, but the synthetic class is sampled uniformly from the hyper-rectangle containing the input data.
ntree
Number of trees to grow. This should not be set to too small a number, to ensure that every input row gets predicted at least a few times.
mtry
Number of variables randomly sampled as candidates at each split. Note that the default values are different for classification and regression (see the sketch after this argument list).
classwt
Priors of the classes. Need not add up to one. Ignored for regression.
nodesize
Minimum size of terminal nodes. Setting this number larger causes smaller trees to be grown (and thus take less time). Note that the default values are different for classification and regression.
importance
Should importance of predictors be assessed?
proximity
Should proximity measure among the rows be calculated? Ignored for regression.
outscale
Should outlyingness of rows be assessed? Ignored for regression.
norm.votes
If TRUE (default), the final class votes are expressed as fractions. If FALSE, raw vote counts are returned (useful for combining results from different runs). Ignored for regression.
do.trace
If set to TRUE, randomForest gives more verbose output as it runs. If set to some integer, running output is printed for every do.trace trees.
keep.forest
If set to FALSE, the forest will not be retained in the output object. If xtest is given, defaults to FALSE.
...
optional parameters to be passed to the low level function randomForest.default.
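
To make the task-dependent defaults of mtry and nodesize concrete, here is a small sketch (not part of the original page) for a hypothetical data set with 16 predictors:

## Task-dependent defaults for a hypothetical x with ncol(x) = 16:
p <- 16
floor(sqrt(p))       ## mtry for classification/unsupervised: 4
max(floor(p/3), 1)   ## mtry for regression: 5
## nodesize defaults to 1 for classification (trees are grown deep)
## and 5 for regression.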

Value

An object of class randomForest, which is a list with the following components:

  • call: the original call to randomForest.
  • type: one of regression, classification, or unsupervised.
  • predicted: the predicted values of the input data based on out-of-bag samples.
  • importance: for classification, a matrix with four columns, each a different measure of importance of the predictors; for regression, a vector. If importance=FALSE when randomForest is called, this component is NULL.
  • ntree: number of trees grown.
  • mtry: number of predictors sampled for splitting at each node.
  • forest: a list that contains the entire forest; NULL if randomForest is run in unsupervised mode or if keep.forest=FALSE.

For classification problems, the following components are also included:

  • err.rate: final error rate of the prediction on the input data.
  • confusion: the confusion matrix of the prediction.
  • votes: a matrix with one row for each input data point and one column for each class, giving the fraction or number of 'votes' from the random forest.
  • proximity: if proximity=TRUE when randomForest is called, a matrix of proximity measures among the input (based on the frequency that pairs of data points are in the same terminal nodes).
  • outlier: if outscale=TRUE when randomForest is called, a vector indicating how outlying the data points are (based on the proximity measures).

For regression problems, the following components are included:

  • mse: mean square error: sum of squared residuals divided by n.
  • rsq: "pseudo R-squared": 1 - mse / Var(y).

If a test set is given (through the xtest or, additionally, ytest arguments), there is also a test component, a list with the following components:

  • predicted: predicted classes/values for the test set.
  • confusion: (classification only) if ytest is given, the confusion matrix for the test set.
  • votes: (classification only) the vote counts for the test set.
  • mse: (regression only) if ytest is given, the test set MSE.
  • rsq: (regression only) if ytest is given, the test set "pseudo R-squared": 1 - mse / Var(ytest), where Var(ytest) is computed with n as the divisor.

Note: The forest structure is slightly different between classification and regression.
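
As a short sketch (not part of the original page), a few of these components can be inspected as follows:

## Inspecting components of a fitted object (sketch):
library(randomForest)
data(iris)
fit <- randomForest(Species ~ ., data=iris)
fit$type          ## "classification"
fit$ntree         ## 500
fit$confusion     ## out-of-bag confusion matrix
head(fit$votes)   ## per-class vote fractions (norm.votes=TRUE)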

References

Breiman, L. (2001), Random Forests, Machine Learning 45(1), 5-32.

Breiman, L. (2002), "Manual On Setting Up, Using, And Understanding Random Forests V3.1", http://oz.berkeley.edu/users/breiman/Using_random_forests_V3.1.pdf.

See Also

predict.randomForest

Examples

## Classification:
data(iris)
set.seed(71)
iris.rf <- randomForest(Species ~ ., data=iris, importance=TRUE,
                        proximity=TRUE)
print(iris.rf)
## Look at variable importance:
print(round(iris.rf$importance, 2))
## Do MDS on 1 - proximity:
## cmdscale() is in the stats package (it lived in the 'mva' package in older R)
iris.mds <- cmdscale(1 - iris.rf$proximity)
pairs(cbind(iris[,1:4], iris.mds), cex=0.6, gap=0.2,
      col=c("red", "green", "blue")[as.numeric(iris$Species)],
      main="Iris Data: Predictors and MDS of Proximity Based on RandomForest")
## Examine the stress of MDS:
print( sum((as.dist(1 - iris.rf$proximity) - dist(iris.mds))^2) /
       sum((as.dist(1 - iris.rf$proximity)^2)) )

## The `unsupervised' case:
set.seed(17)
iris.urf <- randomForest(iris[, -5], proximity=TRUE, outscale=TRUE)
## Look for Outliers:
plot(iris.urf$outlier, type="h", ylab="",
     main="Measure of Outlyingness for Iris Data")

## Regression:
data(airquality)
set.seed(131)
ozone.rf <- randomForest(Ozone ~ ., data=airquality, mtry=3,
                         importance=TRUE, na.action=na.omit)
print(ozone.rf)
## Show "importance" of variables: higher value mean more important:
print(round(ozone.rf$importance, 2))
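
## A further sketch, not in the original examples: supplying a test set
## at training time via the xtest/ytest arguments (the 100/50 split of
## iris below is an arbitrary illustration).
set.seed(111)
train <- sample(nrow(iris), 100)
iris.rf2 <- randomForest(iris[train, -5], iris[train, 5],
                         xtest=iris[-train, -5], ytest=iris[-train, 5])
print(iris.rf2$test$confusion)   ## confusion matrix on the test set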
