
LBBNN (version 0.1.2)

coef.LBBNN_Net: Get model coefficients (local explanations) of an LBBNN_Net object

Description

Given an input sample x_1, ..., x_j (with j the number of variables), the local explanation is found by considering the active paths through the network. If ReLU activation functions are used, the network output is a piecewise linear function of the input, so the contribution of x_j is the sum, over the active paths connecting x_j to the output, of the products of the weights along each path. The contributions are computed by taking the gradient of the output with respect to x.
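As a hedged illustration of this idea (not the package's internal code), the following sketch uses the torch package directly: for a small fixed ReLU network, the gradient of the output with respect to the input recovers, for each variable, the sum of weight products along the active paths. The weights here are arbitrary values chosen for the example.

```r
library(torch)

# Hypothetical two-layer ReLU network with fixed weights (illustration only).
w1 <- torch_tensor(matrix(c(1, -1, 2, 0.5), nrow = 2))  # input -> hidden (2 x 2)
w2 <- torch_tensor(matrix(c(1, 1), nrow = 2))           # hidden -> output (2 x 1)

x <- torch_tensor(c(1, 2), requires_grad = TRUE)
h <- torch_relu(torch_matmul(x, w1))   # hidden activations: (0, 3); unit 1 is inactive
y <- torch_matmul(h, w2)               # scalar output
y$backward()

# Only the second hidden unit is active, so the gradient is
# w2[2] * w1[, 2] = (2, 0.5): the weight products along the active paths.
x$grad
```

Because the output is piecewise linear in x, this gradient is exactly the local linear contribution of each input variable on the active paths.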

Usage

# S3 method for LBBNN_Net
coef(
  object,
  dataset,
  inds = NULL,
  output_neuron = 1,
  num_data = 1,
  num_samples = 10,
  ...
)

Value

A data frame with rows corresponding to input variables and the following columns:

  • lower: lower bound of the 95% confidence interval

  • mean: mean contribution of the variable

  • upper: upper bound of the 95% confidence interval

Arguments

object

an object of class LBBNN_Net.

dataset

Either a torch::dataloader object or a torch::torch_tensor object. A dataloader is assumed to be the same torch::dataloader used for training or testing; a tensor can contain any user-defined data.

inds

Optional integer vector of row indices in the dataset to compute explanations for.

output_neuron

integer, which output neuron to explain (default = 1).

num_data

integer, if no indices are chosen, the first num_data observations of dataset are automatically used for explanations.

num_samples

integer, how many samples to use for model averaging when sampling the weights in the active paths.

...

further arguments passed to or from other methods.

Details

  • If num_data = 1, confidence intervals are computed using model averaging over num_samples weight samples.

  • If num_data > 1, confidence intervals are computed across the mean explanations of the selected data points.

  • The output is a data frame with row names as input variables (x0, x1, x2, ...) and columns giving mean and 95% confidence intervals for each variable.
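As a hedged sketch of how such a summary can be formed (this is an illustration in base R, not the package's internal implementation), one can draw sampled contribution estimates for a single variable and reduce them to the lower/mean/upper columns described above. The rnorm draw stands in for the num_samples sampled path-weight contributions.

```r
set.seed(1)

# Stand-in for num_samples = 10 sampled contributions of one input variable.
contribs <- rnorm(10, mean = 0.5, sd = 0.1)

# Summarise into the lower / mean / upper columns of the returned data frame.
ci <- data.frame(
  lower = unname(quantile(contribs, 0.025)),
  mean  = mean(contribs),
  upper = unname(quantile(contribs, 0.975))
)
ci
```

With num_data > 1, the same reduction would be applied across the per-observation mean explanations instead of the raw weight samples.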

Examples

# \donttest{
x <- torch::torch_randn(3, 2)
b <- torch::torch_rand(2)
y <- torch::torch_matmul(x, b)
train_data <- torch::tensor_dataset(x, y)
train_loader <- torch::dataloader(train_data, batch_size = 3, shuffle = FALSE)
problem <- "regression"
sizes <- c(2, 1, 1)
inclusion_priors <- c(0.9, 0.2)
inclusion_inits <- matrix(rep(c(-10, 10), 2), nrow = 2, ncol = 2)
stds <- c(1.0, 1.0)
model <- LBBNN_Net(problem, sizes, inclusion_priors, stds, inclusion_inits,
                   flow = FALSE, input_skip = TRUE)
train_LBBNN(epochs = 1, LBBNN = model, lr = 0.01, train_dl = train_loader)
coef(model, dataset = x, num_data = 1)
# }
