Usage:

summary(object,
        cBars = length(object$var.names),
        n.trees = object$n.trees,
        plotit = TRUE,
        order = TRUE,
        method = relative.influence,
        normalize = TRUE,
        ...)
Arguments:

object
    a gbm object created from an initial call to gbm.
cBars
    If order = TRUE, only the variables with the cBars largest
    relative influence will appear in the barplot. If order = FALSE,
    then the first cBars variables will appear in the plot. In either
    case, the function will return the relative influence of all of
    the variables.

n.trees
    Only the first n.trees trees will be used.

method
    relative.influence
is the default and is the same as that
described in Friedman (2001). The other current (and experimental) choice is
permutation.test.gbm. This method randomly permutes each predictor
variable, one at a time, and computes the associated reduction in predictive
performance. This is similar to the variable importance measures Breiman uses
for random forests, but gbm
currently computes using the entire training
dataset (not the out-of-bag observations).

normalize
    If FALSE then summary.gbm returns the unnormalized influence.

Details:

For distribution = "gaussian"
this returns exactly the reduction
of squared error attributable to each variable. For other loss functions this
returns the reduction, attributable to each variable, in the sum of squared error in
predicting the gradient on each iteration. It describes the relative influence
of each variable in reducing the loss function. See the references below for
exact details on the computation.
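The permutation approach described under method can be sketched as follows. This is a minimal illustration of the idea, not the package's implementation; the function name perm_importance and the default squared-error loss are assumptions, and it expects a fitted model fit with a working predict method, a data frame data of predictors, and a response vector y:

```r
# Sketch of permutation-based variable importance (hypothetical helper).
# For each predictor, permute its values, re-predict, and record the
# resulting increase in loss relative to the unpermuted baseline.
perm_importance <- function(fit, data, y,
                            loss = function(y, p) mean((y - p)^2)) {
  base <- loss(y, predict(fit, data))          # baseline performance
  sapply(names(data), function(v) {
    shuffled <- data
    shuffled[[v]] <- sample(shuffled[[v]])     # permute one predictor
    loss(y, predict(fit, shuffled)) - base     # reduction in performance
  })
}
```

Note that, as described above, this version uses the entire training dataset rather than the out-of-bag observations Breiman uses for random forests.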
References:

J.H. Friedman (2001). "Greedy Function Approximation: A Gradient Boosting Machine," Annals of Statistics 29(5):1189-1232.

L. Breiman (2001). "Random Forests," Machine Learning 45(1):5-32.
See Also:

gbm
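A short usage sketch, assuming the gbm package is installed; the simulated data frame and parameter values are illustrative only:

```r
library(gbm)

# Simulated data in which only x1 drives the response
set.seed(1)
df <- data.frame(x1 = runif(200), x2 = runif(200), x3 = runif(200))
df$y <- 2 * df$x1 + rnorm(200, sd = 0.1)

fit <- gbm(y ~ ., data = df, distribution = "gaussian",
           n.trees = 100, interaction.depth = 2)

# Relative influence of all predictors, without the barplot;
# x1 should receive by far the largest share
summary(fit, n.trees = 100, plotit = FALSE)
```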