"summary"(object, cBars=length(object$var.names), n.trees=object$n.trees, plotit=TRUE, order=TRUE, method=relative.influence, normalize=TRUE, ...)erboost object created from an initial call to
erboost.order=TRUE the only the
variables with the cBars largest relative influence will appear in the
barplot. If order=FALSE then the first cBars variables will
appear in the plot. In either case, the function will return the relative
influence of all of the variables.n.trees trees will be used.relative.influence is the default and is the same as that
described in Friedman (2001). The other current (and experimental) choice is
permutation.test.erboost. This method randomly permutes each predictor
variable at a time and computes the associated reduction in predictive
performance. This is similar to the variable importance measures Breiman uses
for random forests, but erboost currently computes using the entire training
dataset (not the out-of-bag observations.FALSE then summary.erboost returns the
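Details

     The following is a conceptual sketch of the permutation measure
     described under 'method', not the package's actual
     implementation; the names perm_importance, X, y, and loss are
     hypothetical placeholders. It permutes one predictor column at a
     time, re-predicts, and records the increase in training-set loss.

     ## Not run:
     ## model: a fitted erboost object; X: training predictors
     ## (data.frame); y: response; loss(): a loss function such as
     ## function(y, p) mean((y - p)^2) -- all hypothetical here.
     perm_importance <- function(model, X, y, loss) {
       base <- loss(y, predict(model, X, n.trees = model$n.trees))
       sapply(names(X), function(v) {
         Xp <- X
         Xp[[v]] <- sample(Xp[[v]])   # permute one predictor column
         # increase in loss attributable to breaking this predictor
         loss(y, predict(model, Xp, n.trees = model$n.trees)) - base
       })
     }
     ## End(Not run)

     summary.erboost performs this computation internally when
     method = permutation.test.erboost is supplied.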
References

     J.H. Friedman (2001). "Greedy Function Approximation: A Gradient
     Boosting Machine," Annals of Statistics 29(5):1189-1232.

     G. Ridgeway (1999). "The state of boosting," Computing Science
     and Statistics 31:172-181.

     The gbm package: https://cran.r-project.org/package=gbm
See Also

     erboost
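Examples

     A minimal sketch, not taken from the package itself: it assumes a
     hypothetical fitted model 'fit' obtained from an earlier
     erboost() call and exercises only the arguments documented above.

     ## Not run:
     ## fit is assumed to be a fitted erboost model, e.g. from erboost()
     ri <- summary(fit)                    # barplot plus influence of every variable
     summary(fit, cBars = 5)               # plot only the 5 most influential variables
     summary(fit, order = FALSE, plotit = FALSE)       # unsorted influences, no plot
     summary(fit, normalize = FALSE)       # raw, unnormalized influence
     summary(fit, method = permutation.test.erboost)   # experimental permutation measure
     ## End(Not run)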