This layer implements the Bayesian variational inference analogue to
a dense layer by assuming the kernel and/or the bias are drawn
from distributions.
layer_dense_reparameterization(
object,
units,
activation = NULL,
activity_regularizer = NULL,
trainable = TRUE,
kernel_posterior_fn = tfp$layers$util$default_mean_field_normal_fn(),
kernel_posterior_tensor_fn = function(d) d %>% tfd_sample(),
kernel_prior_fn = tfp$layers$util$default_multivariate_normal_fn,
kernel_divergence_fn = function(q, p, ignore) tfd_kl_divergence(q, p),
bias_posterior_fn = tfp$layers$util$default_mean_field_normal_fn(is_singular = TRUE),
bias_posterior_tensor_fn = function(d) d %>% tfd_sample(),
bias_prior_fn = NULL,
bias_divergence_fn = function(q, p, ignore) tfd_kl_divergence(q, p),
...
)

object: Model or layer object.
units: Integer dimensionality of the output space.
activation: Activation function. Set it to NULL to maintain a linear activation.
activity_regularizer: Regularizer function for the output.
trainable: Whether the layer weights will be updated during training.
kernel_posterior_fn: Function which creates a tfd$Distribution instance representing the surrogate posterior of the kernel parameter. Default value: default_mean_field_normal_fn().
kernel_posterior_tensor_fn: Function which takes a tfd$Distribution instance and returns a representative value. Default value: function(d) d %>% tfd_sample().
kernel_prior_fn: Function which creates a tfd$Distribution instance. See the default_mean_field_normal_fn docstring for the required parameter signature. Default value: tfp$layers$util$default_multivariate_normal_fn (a standard normal, tfd_normal(loc = 0, scale = 1)).
kernel_divergence_fn: Function which takes the surrogate posterior distribution, the prior distribution and random variate sample(s) from the surrogate posterior, and computes or approximates the KL divergence. The distributions are tfd$Distribution-like instances and the sample is a Tensor.
bias_posterior_fn: Function which creates a tfd$Distribution instance representing the surrogate posterior of the bias parameter. Default value: default_mean_field_normal_fn(is_singular = TRUE) (which creates an instance of tfd_deterministic).
bias_posterior_tensor_fn: Function which takes a tfd$Distribution instance and returns a representative value. Default value: function(d) d %>% tfd_sample().
bias_prior_fn: Function which creates a tfd$Distribution instance. See the default_mean_field_normal_fn docstring for the required parameter signature. Default value: NULL (no prior, no variational inference).
bias_divergence_fn: Function which takes the surrogate posterior distribution, the prior distribution and random variate sample(s) from the surrogate posterior, and computes or approximates the KL divergence. The distributions are tfd$Distribution-like instances and the sample is a Tensor.
...: Additional keyword arguments passed to the keras::layer_dense constructed by this layer.
Returns a Keras layer.
By default, the layer implements a stochastic forward pass via sampling from the kernel and bias posteriors:

kernel, bias ~ posterior
outputs = activation(matmul(inputs, kernel) + bias)

It uses the reparameterization estimator (Kingma and Welling, 2014), which performs a Monte Carlo approximation of the distribution integrating over the kernel and bias.
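For illustration, a minimal sketch (the layer sizes are arbitrary and x_test is a hypothetical test matrix) of composing the layer into a model whose weights are resampled on every forward pass:

library(keras)
library(tfprobability)

model <- keras_model_sequential() %>%
  layer_dense_reparameterization(units = 64, activation = "relu") %>%
  layer_dense_reparameterization(units = 1)

# Kernel and bias are sampled anew on each call, so repeated predictions on
# the same inputs differ; averaging them approximates the posterior predictive.
# preds <- replicate(10, predict(model, x_test), simplify = FALSE)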
The arguments permit separate specification of the surrogate posterior
(q(W|x)), prior (p(W)), and divergence for both the kernel and bias
distributions.
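As one sketch of such a specification (assuming the (dtype, shape, name, trainable, add_variable_fn) signature described in the default_mean_field_normal_fn docstring; the scale of 2 is an arbitrary illustration), a wider fixed prior could be supplied for the kernel:

library(tensorflow)
library(tfprobability)

# Hypothetical custom prior: a fixed normal with standard deviation 2.
wide_prior_fn <- function(dtype, shape, name, trainable, add_variable_fn) {
  dist <- tfd_normal(loc = tf$zeros(shape, dtype), scale = 2)
  # Collapse all weight dimensions into a single event so one KL term results.
  tfd_independent(dist, reinterpreted_batch_ndims = tf$size(dist$batch_shape_tensor()))
}

custom_layer <- layer_dense_reparameterization(units = 16, kernel_prior_fn = wide_prior_fn)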
Upon being built, this layer adds losses (accessible via the losses
property) representing the divergences of kernel and/or bias surrogate
posteriors and their respective priors. When doing minibatch stochastic
optimization, make sure to scale this loss such that it is applied just once
per epoch (e.g. if kl is the sum of losses for each element of the batch,
you should pass kl / num_examples_per_epoch to your optimizer).
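For example, a sketch of folding that scaling into the divergence functions (n_train stands in for the number of training examples per epoch):

library(tfprobability)

n_train <- 10000  # hypothetical number of training examples per epoch
scaled_kl <- function(q, p, ignore) tfd_kl_divergence(q, p) / n_train

scaled_layer <- layer_dense_reparameterization(
  units = 16,
  kernel_divergence_fn = scaled_kl,
  bias_divergence_fn = scaled_kl
)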
You can access the kernel and/or bias posterior and prior distributions
after the layer is built via the kernel_posterior, kernel_prior,
bias_posterior and bias_prior properties.
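For instance (a sketch; the properties exist only after the layer has been built, e.g. by calling the model on data):

library(tfprobability)

dense_layer <- layer_dense_reparameterization(units = 16)
# ... after the layer has been built as part of a model:
# posterior <- dense_layer$kernel_posterior
# prior <- dense_layer$kernel_prior
# posterior_means <- tfd_mean(posterior)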
Other layers:
layer_autoregressive(),
layer_conv_1d_flipout(),
layer_conv_1d_reparameterization(),
layer_conv_2d_flipout(),
layer_conv_2d_reparameterization(),
layer_conv_3d_flipout(),
layer_conv_3d_reparameterization(),
layer_dense_flipout(),
layer_dense_local_reparameterization(),
layer_dense_variational(),
layer_variable()