
olden(mod_in, ...)

## Default S3 method:
olden(mod_in, x_names, y_names, out_var = NULL, bar_plot = TRUE,
  x_lab = NULL, y_lab = NULL, skip_wts = NULL, ...)

## S3 method for class 'numeric':
olden(mod_in, struct, ...)

## S3 method for class 'nnet':
olden(mod_in, ...)

## S3 method for class 'mlp':
olden(mod_in, ...)

## S3 method for class 'nn':
olden(mod_in, ...)

## S3 method for class 'train':
olden(mod_in, ...)
mod_in: input model object or a list of model weights as returned from neuralweights if using the default method

x_names: chr string of input variable names, obtained from the model object

y_names: chr string of response variable names, obtained from the model object

out_var: chr string indicating the response variable in the neural network to be evaluated; only one response is allowed for models with multiple outputs. Names must be of the form 'Y1', 'Y2', etc. if using numeric values as weight inputs for mod_in. Default is the first response variable.

bar_plot: logical indicating if a ggplot object is returned (default TRUE), otherwise numeric values are returned

x_lab: chr string of alternative names for the explanatory variables in the figure; default is taken from mod_in

y_lab: chr string of an alternative name for the response variable in the figure; default is taken from out_var

skip_wts: vector from neuralskips for nnet models with skip-layer connections

struct: numeric vector equal in length to the number of layers in the network, where each value is the number of nodes in that layer, starting with the input layer and ending with the output layer

...: arguments passed to or from other methods

Value: a ggplot object for plotting if bar_plot = TRUE, otherwise a data.frame of relative importance values for each input variable.
The importance values assigned to each variable are in units that are based directly on the summed product of the connection weights. The actual values should only be interpreted based on relative sign and magnitude between explanatory variables. Comparisons between different models should not be made.
The Olden function also works with networks that have skip layers by adding the input-output connection weights to the final summed product of all input-hidden and hidden-output connections. This was not described in the original method so interpret with caution.
By default, the results are shown only for the first response variable for networks with multiple output nodes. The plotted response variable can be changed with out_var.
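For example, with numeric weight inputs the response names default to 'Y1', 'Y2', etc., so the second output of a hypothetical 2-2-2 network could be selected as follows (a minimal sketch with made-up weight values, not taken from a fitted model):

wts2 <- c(1.1, 0.5, -0.3, -0.2, 0.4, 0.9, 0.3, -0.6, 0.7, -0.1, 0.2, -0.8)
struct2 <- c(2, 2, 2) # two inputs, two hidden nodes, two outputs
olden(wts2, struct2, out_var = 'Y2')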
Goh, A.T.C. 1995. Back-propagation neural networks for modeling complex systems. Artificial Intelligence in Engineering. 9(3):143-151.
Olden, J.D., Jackson, D.A. 2002. Illuminating the 'black-box': a randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling. 154:135-150.
Olden, J.D., Joy, M.K., Death, R.G. 2004. An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecological Modelling. 178:389-397.
## using numeric input
wts_in <- c(13.12, 1.49, 0.16, -0.11, -0.19, -0.16, 0.56, -0.52, 0.81)
struct <- c(2, 2, 1) #two inputs, two hidden, one output
olden(wts_in, struct)
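## hand check of the summed-product calculation (a minimal sketch using the
## wts_in and struct values above; nnet-style ordering of one bias followed
## by the incoming weights per node is assumed)
inp_hid <- matrix(wts_in[1:6], ncol = 3, byrow = TRUE)[, -1] # input-hidden weights, biases dropped
hid_out <- wts_in[8:9] # hidden-output weights, bias dropped
colSums(inp_hid * hid_out) # importance of X1 and X2
## compare with olden(wts_in, struct, bar_plot = FALSE)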
## using nnet
library(nnet)
data(neuraldat)
set.seed(123)
mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5)
olden(mod)
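## return the importance values as a data.frame instead of a plot
## (bar_plot = FALSE suppresses the ggplot output)
olden(mod, bar_plot = FALSE)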
## Not run:
# ## View the difference for a model w/ skip layers
#
# set.seed(123)
#
# mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5, skip = TRUE)
#
# olden(mod)
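#
# # supplementary sketch: inspect the skip-layer (direct input-output)
# # weights that olden adds to the summed product
# neuralskips(mod)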
#
# ## using RSNNS, no bias layers
#
# library(RSNNS)
#
# x <- neuraldat[, c('X1', 'X2', 'X3')]
# y <- neuraldat[, 'Y1']
# mod <- mlp(x, y, size = 5)
#
# olden(mod)
#
# ## using neuralnet
#
# library(neuralnet)
#
# mod <- neuralnet(Y1 ~ X1 + X2 + X3, data = neuraldat, hidden = 5)
#
# olden(mod)
#
# ## using caret
#
# library(caret)
#
# mod <- train(Y1 ~ X1 + X2 + X3, method = 'nnet', data = neuraldat, linout = TRUE)
#
# olden(mod)
#
# ## multiple hidden layers
#
# x <- neuraldat[, c('X1', 'X2', 'X3')]
# y <- neuraldat[, 'Y1']
# mod <- mlp(x, y, size = c(5, 7, 6), linOut = TRUE)
#
# olden(mod)
# ## End(Not run)