
SEMdeep (version 0.1.0)

getConnectionWeight: Connection Weight Approach for neural network variable importance

Description

For each pair of input and output neurons, the function computes the product of the raw input-hidden and hidden-output connection weights and sums these products across all hidden neurons, as proposed by Olden (2004).
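For a single hidden layer, this summed product reduces to a matrix multiplication of the two weight matrices. A minimal sketch of the idea (illustrative only, not the SEMdeep implementation; the matrix names are made up):

set.seed(1)
W_ih <- matrix(rnorm(3 * 5), nrow = 3)  # input-to-hidden weights (3 inputs, 5 hidden)
W_ho <- matrix(rnorm(5 * 2), nrow = 5)  # hidden-to-output weights (5 hidden, 2 outputs)
olden <- W_ih %*% W_ho                  # olden[i, o] = sum_h W_ih[i, h] * W_ho[h, o]
olden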

Usage

getConnectionWeight(object, thr = NULL, verbose = FALSE, ...)

Value

A list of two objects: (i) a data.frame reporting the connections together with their weights (W), and (ii) the DAG with colored edges. If abs(W) > thr and W < 0, the edge is inhibited and highlighted in blue; if abs(W) > thr and W > 0, the edge is activated and highlighted in red. The coloring rule is sketched below.
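A hedged sketch of this coloring rule (the helper function and the "gray" fallback for edges below the threshold are illustrative assumptions, not the package's code):

edge_color <- function(W, thr) {
  ifelse(abs(W) > thr & W < 0, "blue",          # inhibited edge
         ifelse(abs(W) > thr & W > 0, "red",    # activated edge
                "gray"))                        # below threshold (assumed fallback)
}
edge_color(c(-0.8, 0.2, 0.9), thr = 0.5)  # "blue" "gray" "red"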

Arguments

object

A neural network object from the SEMdnn() function.

thr

The threshold applied to the connection weights. If NULL (default), the threshold is set to thr = mean(abs(connection weights)).

verbose

A logical value. If FALSE (default), the processed graph will not be plotted to screen.

...

Currently ignored.

Author

Mario Grassi mario.grassi@unipv.it

Details

In a neural network, the connections between inputs and outputs are represented by the connection weights between neurons. With Olden's method, the importance assigned to each input variable is expressed directly in units of the summed product of the connection weights. The magnitude and sign of these weights largely determine the proportional contributions of the input variables to the network's predicted output: inputs with larger absolute connection weights transmit stronger signals and are therefore more important in the prediction process. Positive connection weights represent excitatory effects on neurons (raising the intensity of the incoming signal) and increase the value of the predicted response, while negative connection weights represent inhibitory effects (reducing the intensity of the incoming signal) and decrease it. Weights that change sign between the input-hidden and hidden-output layers (e.g., positive to negative) have a cancelling effect, whereas weights that keep the same sign have a synergistic effect.

To map the connection weights onto the DAG edges, the element-wise product W*A is computed between Olden's weights, arranged in a (p x p) matrix W, and the binary (0,1) adjacency matrix A (p x p) of the input DAG, as illustrated below.
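A small illustration of this masking step (the weight values in W are made up; only the element-wise product mirrors the description above):

library(igraph)
g <- graph_from_literal(x --+ y, y --+ z)       # toy DAG on p = 3 nodes
A <- as.matrix(as_adjacency_matrix(g))          # binary (0,1) adjacency matrix
W <- matrix(c(0, -0.9, 0,
              0,  0,   0.7,
              0,  0,   0), 3, 3, byrow = TRUE,
            dimnames = dimnames(A))             # made-up Olden weights
W * A                                           # keeps weights only on DAG edges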

References

Olden, J.D., & Jackson, D.A. (2002). Illuminating the "black box": A randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling, 154(1-2), 135-150. doi:10.1016/S0304-3800(02)00064-9

Olden, J.D., Joy, M.K., & Death, R.G. (2004). An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecological Modelling, 178(3-4), 389-397. doi:10.1016/S0304-3800(04)00156-5

Examples


# \donttest{
if (torch::torch_is_installed()){

# load the package; alsData and transformData are provided
# via the SEMgraph dependency
library(SEMdeep)
library(igraph)

# load ALS data
ig <- alsData$graph
data <- alsData$exprs
data <- transformData(data)$data

# fit a DNN with three hidden layers of 10 neurons each
dnn0 <- SEMdnn(ig, data, train = 1:nrow(data), cowt = FALSE,
               loss = "mse", hidden = c(10, 10, 10), link = "selu",
               validation = 0, bias = TRUE, lr = 0.01,
               epochs = 32, device = "cpu", verbose = TRUE)

# compute Olden's connection weights and plot the colored DAG
res <- getConnectionWeight(dnn0, thr = NULL, verbose = TRUE)

# count activated (red) and inhibited (blue) edges
table(E(res$dag)$color)
}
# }
