LBBNN (version 0.1.2)

LBBNN_Linear: Class to generate an LBBNN feed forward layer

Description

This module implements a fully connected LBBNN layer. It supports:

  • Prior inclusion probabilities for weights and biases in each layer.

  • Standard deviation priors for weights and biases in each layer.

  • Optional normalizing flows (RNVP) for a more flexible variational posterior.

  • Forward pass using either the full model or the Median Probability Model (MPM).

  • Computation of the KL-divergence.

Usage

LBBNN_Linear(
  in_features,
  out_features,
  prior_inclusion,
  standard_prior,
  density_init,
  flow = FALSE,
  num_transforms = 2,
  hidden_dims = c(200, 200),
  device = "cpu",
  bias_inclusion_prob = FALSE,
  conv_net = FALSE
)

Value

A torch::nn_module object representing a fully connected LBBNN layer. The module has the following methods:

  • forward(input, MPM = FALSE): Computes the activations of a batch of inputs (using the local reparameterization trick, LRT, at training time).

  • kl_div(): Computes the KL-divergence.

  • sample_z(): Samples from the flow if flow = TRUE and additionally returns the log-determinant of the Jacobian of the transformation.
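
The kl_div() term is typically added to a data-fitting loss to form the negative ELBO. A minimal sketch of one training step follows (assuming the LBBNN and torch packages are loaded; the MSE loss, Adam optimizer, and KL scaling below are illustrative choices, not prescribed by the package):

```r
library(torch)
library(LBBNN)

# A single layer with 10 inputs and 1 output (illustrative settings).
layer <- LBBNN_Linear(in_features = 10, out_features = 1,
                      prior_inclusion = 0.25, standard_prior = 1,
                      density_init = c(0, 1), flow = FALSE)
opt <- optim_adam(layer$parameters, lr = 0.01)

x <- torch_rand(32, 10) # a batch of 32 inputs
y <- torch_rand(32, 1)  # illustrative targets

opt$zero_grad()
pred <- layer(x, MPM = FALSE)     # stochastic forward pass (full model)
nll <- nnf_mse_loss(pred, y)      # illustrative likelihood term
loss <- nll + layer$kl_div() / 32 # negative ELBO; KL scaled by batch size here
loss$backward()
opt$step()
```

At prediction time the same layer can be called with MPM = TRUE to use only the weights whose posterior inclusion probability exceeds 0.5.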

Arguments

in_features

integer, number of input neurons.

out_features

integer, number of output neurons.

prior_inclusion

numeric scalar, prior inclusion probability for each weight and bias in the layer.

standard_prior

numeric scalar, prior standard deviation for the weights and biases in the layer.

density_init

numeric vector of length 2, used to initialize the inclusion parameters of the layer.

flow

logical, whether to use normalizing flows.

num_transforms

integer, number of transformations for flow. Default is 2.

hidden_dims

numeric vector, dimension of the hidden layer(s) in the neural networks of the RNVP transform.

device

The device to be used. Default is "cpu".

bias_inclusion_prob

logical, determines whether the bias should be associated with inclusion probabilities.

conv_net

logical, whether the layer is to be used in a convolutional net.

Examples

# \donttest{
l1 <- LBBNN_Linear(in_features = 10, out_features = 5, prior_inclusion = 0.25,
                   standard_prior = 1, density_init = c(0, 1), flow = FALSE)
x <- torch::torch_rand(20, 10, requires_grad = FALSE)
output <- l1(x, MPM = FALSE) # the forward pass; output has shape (20, 5)
print(l1$kl_div()$item()) # compute the KL-divergence after the forward pass
# }
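
For a more flexible variational posterior, the same constructor can be called with flow = TRUE; num_transforms and hidden_dims then control the RNVP transform. A sketch (the values below are illustrative, assuming the LBBNN and torch packages are installed):

```r
library(LBBNN)

# Same layer shape as above, but with an RNVP flow posterior.
l2 <- LBBNN_Linear(in_features = 10, out_features = 5, prior_inclusion = 0.25,
                   standard_prior = 1, density_init = c(0, 1), flow = TRUE,
                   num_transforms = 2, hidden_dims = c(200, 200))
x <- torch::torch_rand(20, 10)
out <- l2(x, MPM = FALSE)  # forward pass through the flow posterior
print(l2$kl_div()$item())  # KL-divergence after the forward pass
```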
