LBBNN (version 0.1.2)

LBBNN_Net: Feed-forward Latent Binary Bayesian Neural Network (LBBNN)

Description

Constructs a feed-forward latent binary Bayesian neural network (LBBNN). Each layer is defined by LBBNN_Linear. For example, sizes = c(20, 200, 200, 5) generates a network with (a construction sketch follows the list):

  • 20 input features,

  • two hidden layers of 200 neurons each,

  • an output layer with 5 neurons.
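
For instance, such a network could be constructed as in the following sketch; the prior, std, and inclusion_inits values below are arbitrary illustrations, not package defaults:

sizes <- c(20, 200, 200, 5)
prior <- rep(0.5, length(sizes) - 1)           # one inclusion prior per weight matrix
std   <- rep(1.0, length(sizes) - 1)           # one prior std per weight matrix
inits <- matrix(rep(c(-10, 10), 3), nrow = 2)  # init bounds, one column per weight matrix
net <- LBBNN_Net(problem_type = 'multiclass classification',
                 sizes = sizes, prior = prior, std = std,
                 inclusion_inits = inits)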

Usage

LBBNN_Net(
  problem_type,
  sizes,
  prior,
  std,
  inclusion_inits,
  input_skip = FALSE,
  flow = FALSE,
  num_transforms = 2,
  dims = c(200, 200),
  device = "cpu",
  raw_output = FALSE,
  custom_act = NULL,
  link = NULL,
  nll = NULL,
  bias_inclusion_prob = FALSE
)

Value

A torch::nn_module object representing the LBBNN. It includes the following methods; a usage sketch follows the list:

  • forward(x, MPM = FALSE): Performs a forward pass through the whole network. With MPM = TRUE, only weights whose inclusion probability exceeds 0.5 (the median probability model) are used.

  • kl_div(): Returns the KL divergence of the network.

  • density(): Returns the density of the whole network, i.e. the proportion of weights with inclusion probabilities greater than 0.5.

  • compute_paths(): Computes active paths through the network without input-skip.

  • compute_paths_input_skip(): Computes active paths with input-skip enabled.

  • density_active_path(): Returns network density after removing inactive paths.
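
A hedged usage sketch of these methods, assuming net and x are constructed as in the Examples section below:

preds <- net$forward(x, MPM = TRUE)  # forward pass with the median probability model
net$kl_div()$item()                  # scalar KL divergence, e.g. for an ELBO loss
net$density()                        # proportion of weights with inclusion prob > 0.5
net$compute_paths()                  # find active paths (input_skip = FALSE case)
net$density_active_path()            # density after pruning inactive paths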

Arguments

problem_type

character, one of: 'binary classification', 'multiclass classification', 'regression', or 'custom'.

sizes

Integer vector specifying the layer sizes of the network. The first element is the input size, the last is the output size, and the intermediate integers represent hidden layers.

prior

numeric vector of prior inclusion probabilities, one per weight matrix; its length must be length(sizes) - 1.

std

numeric vector of prior standard deviations, one per weight matrix; its length must be length(sizes) - 1.

inclusion_inits

numeric matrix of shape (2, number of weight matrices) giving the lower and upper bounds used to initialize the inclusion parameters. See the example below.
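
For a network with two weight matrices (e.g. sizes = c(10, 2, 5)), a matrix initializing all inclusion parameters between -10 and 10 can be built as below; following the Examples section, row 1 is taken to hold the lower bounds and row 2 the upper bounds:

# one column per weight matrix; row 1 = lower bounds, row 2 = upper bounds
inclusion_inits <- matrix(c(-10, 10,
                            -10, 10),
                          nrow = 2, ncol = 2)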

input_skip

logical, whether to include input-skip connections.

flow

logical, whether to use normalizing flows.

num_transforms

integer, how many transformations to use in the flow.

dims

numeric vector, hidden dimensions of the neural network used in the RNVP transform.
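
A hedged sketch of enabling the flow-based posterior; the layer sizes, priors, and the single output neuron for binary classification are illustrative assumptions:

inits <- matrix(rep(c(-10, 10), 2), nrow = 2)  # inclusion init bounds
net_flow <- LBBNN_Net(problem_type = 'binary classification',
                      sizes = c(10, 50, 1),    # assumed: 1 output neuron for binary
                      prior = c(0.5, 0.5), std = c(1.0, 1.0),
                      inclusion_inits = inits,
                      flow = TRUE,             # normalizing-flow posterior
                      num_transforms = 2,      # two RNVP transformations
                      dims = c(100, 100))      # hidden dims inside each transform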

device

character, the device to train on: 'cpu', 'gpu', or 'mps'. Default is 'cpu'.

raw_output

logical, whether the network should skip the final sigmoid/softmax activation (used when computing local explanations).

custom_act

Allows the user to supply a custom activation function.

link

Allows the user to define their own link function (not yet implemented).

nll

Allows the user to define their own negative log-likelihood function (not yet implemented).

bias_inclusion_prob

logical, determines whether the bias should be associated with inclusion probabilities.

Examples

# \donttest{
layers <- c(10, 2, 5)   # 10 inputs, one hidden layer of 2 neurons, 5 outputs
alpha  <- c(0.3, 0.9)   # prior inclusion probability per weight matrix
stds   <- c(1.0, 1.0)   # prior standard deviation per weight matrix
inclusion_inits <- matrix(rep(c(-10, 10), 2), nrow = 2, ncol = 2)  # init bounds
prob <- 'multiclass classification'
net <- LBBNN_Net(problem_type = prob, sizes = layers, prior = alpha,
                 std = stds, inclusion_inits = inclusion_inits,
                 input_skip = FALSE, flow = FALSE, device = 'cpu')
x <- torch::torch_rand(20, 10, requires_grad = FALSE)  # batch of 20 inputs
output <- net(x)       # forward pass
net$kl_div()$item()    # KL divergence of the network
net$density()          # proportion of weights with inclusion prob > 0.5
# }
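
During training, kl_div() is typically added to the data term of the loss. A hedged sketch of one optimization step follows; the loss construction is standard variational-inference practice, not a documented package API, and the labels are fabricated for illustration:

y <- torch::torch_randint(1, 6, size = 20, dtype = torch::torch_long())  # labels in 1..5
opt <- torch::optim_adam(net$parameters, lr = 0.01)
opt$zero_grad()
out <- net(x)                                         # class probabilities
loss <- torch::nnf_nll_loss(torch::torch_log(out), y) + net$kl_div()  # negative ELBO
loss$backward()
opt$step()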
