## S3 method for class 'gbm':
summary(object,
cBars=length(object$var.names),
n.trees=object$n.trees,
plotit=TRUE,
order=TRUE,
method=relative.influence,
normalize=TRUE,
...)
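A minimal usage sketch, assuming the gbm package is installed; the simulated data, the variable names (x1, x2, x3, y), and the tuning values below are hypothetical and only illustrate the call:

library(gbm)

# simulate a small regression data set
set.seed(1)
N <- 1000
d <- data.frame(x1 = runif(N),
                x2 = runif(N),
                x3 = factor(sample(letters[1:4], N, replace = TRUE)))
d$y <- d$x1^2 + 2 * d$x2 + rnorm(N, sd = 0.3)

# fit a boosted regression model
fit <- gbm(y ~ x1 + x2 + x3, data = d, distribution = "gaussian",
           n.trees = 500, interaction.depth = 2, shrinkage = 0.05)

# relative influence of every predictor, sorted, without drawing the barplot
summary(fit, n.trees = fit$n.trees, plotit = FALSE, order = TRUE)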
object: a gbm object created from an initial call to gbm.
cBars: the number of bars to plot. If order=TRUE then only the variables with the cBars largest relative influence will appear in the barplot. If order=FALSE then the first cBars variables will appear in the plot.

n.trees: the number of trees used to generate the plot; only the first n.trees trees will be used.

method: the function used to compute the relative influence. relative.influence is the default and is the same as that described in Friedman (2001). The other current (and experimental) choice is permutation.test.gbm.
normalize: if FALSE then summary.gbm returns the unnormalized influence.

For distribution="gaussian" this returns exactly the reduction of squared error attributable to each variable. For other loss functions this returns the reduction attributable to each variable in the sum of squared error in predicting the gradient on each iteration. It describes the relative influence of each variable in reducing the loss function. See the references below for exact details on the computation.
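As a hedged follow-on to the sketch above (reusing the hypothetical fit object), the method and normalize choices can be compared directly; the exact numbers will vary between runs and between methods:

# default: relative.influence, rescaled so the influences sum to 100
ri_default <- summary(fit, plotit = FALSE)

# unnormalized influence; for distribution = "gaussian" this is the reduction
# in squared error attributable to each variable
ri_raw <- summary(fit, plotit = FALSE, normalize = FALSE)

# experimental alternative that permutes each predictor in turn
ri_perm <- summary(fit, plotit = FALSE, method = permutation.test.gbm)

ri_default  # a data frame with one row per variable (var, rel.inf)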