Computes local explanations by taking the gradient of the output with respect to the input, assuming the network uses ReLU activation functions.
Usage

get_local_explanations_gradient(
  model,
  input_data,
  num_samples = 1,
  magnitude = TRUE,
  include_potential_contribution = FALSE,
  device = "cpu"
)

Value

A list with the following elements:
- A torch::tensor of shape (num_samples, p, num_classes).
- An integer, the number of input features.
- A torch::tensor of shape (num_samples, num_classes).
Arguments

model: A LBBNN_Net with input-skip.

input_data: The data to be explained (one sample).

num_samples: integer, how many samples to use to produce credible intervals.

magnitude: logical. If TRUE, return the raw gradient explanations only; if FALSE, multiply the gradients by the input values.

include_potential_contribution: logical. If TRUE, a covariate equal to 0 is assumed to contribute negatively (it matters, for better or worse, that it is not included); if FALSE, zero-valued covariates are simply removed.

device: character, the device to compute on. Default is "cpu"; can be "mps" or "gpu".
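The gradient-based explanation above can be illustrated outside the package. Below is a minimal sketch in Python/NumPy (not the package's implementation): a hypothetical two-layer ReLU network where the gradient of each class output with respect to the input is derived by hand, which is exact because a ReLU network is piecewise linear. The raw gradients correspond to magnitude = TRUE; multiplying them by the input corresponds to magnitude = FALSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2
p, hidden, num_classes = 4, 8, 3
W1 = rng.normal(size=(hidden, p))
b1 = rng.normal(size=hidden)
W2 = rng.normal(size=(num_classes, hidden))
b2 = rng.normal(size=num_classes)

def forward(x):
    z = W1 @ x + b1
    h = np.maximum(z, 0.0)          # ReLU
    return W2 @ h + b2, z

def input_gradient(x):
    """Gradient of each class output w.r.t. the input.

    With ReLU activations the network is piecewise linear, so
    d f / d x = W2 @ diag(z > 0) @ W1 exactly.
    Returns shape (num_classes, p).
    """
    _, z = forward(x)
    mask = (z > 0).astype(float)    # active ReLU units
    return W2 @ (mask[:, None] * W1)

x = rng.normal(size=p)
grads = input_gradient(x)           # analogue of magnitude = TRUE
grad_times_input = grads * x        # analogue of magnitude = FALSE

# Finite-difference check that the analytic gradient is correct.
eps = 1e-6
num = np.empty_like(grads)
for j in range(p):
    e = np.zeros(p)
    e[j] = eps
    num[:, j] = (forward(x + e)[0] - forward(x - e)[0]) / (2 * eps)
print(np.allclose(grads, num, atol=1e-4))
```

In the package itself, num_samples > 1 would repeat this computation over posterior samples of the network weights, yielding a distribution of gradients per feature from which credible intervals can be formed.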