This layer implements the Bayesian variational inference analogue to a dense layer by assuming the kernel and/or the bias are drawn from distributions.
layer_dense_flipout(
object,
units,
activation = NULL,
activity_regularizer = NULL,
trainable = TRUE,
kernel_posterior_fn = tfp$layers$util$default_mean_field_normal_fn(),
kernel_posterior_tensor_fn = function(d) d %>% tfd_sample(),
kernel_prior_fn = tfp$layers$util$default_multivariate_normal_fn,
kernel_divergence_fn = function(q, p, ignore) tfd_kl_divergence(q, p),
bias_posterior_fn = tfp$layers$util$default_mean_field_normal_fn(is_singular = TRUE),
bias_posterior_tensor_fn = function(d) d %>% tfd_sample(),
bias_prior_fn = NULL,
bias_divergence_fn = function(q, p, ignore) tfd_kl_divergence(q, p),
seed = NULL,
...
)
Arguments:

object: What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()).

units: Integer dimensionality of the output space.

activation: Activation function. Set it to NULL to maintain a linear activation.

activity_regularizer: Regularizer function for the output.

trainable: Whether the layer weights will be updated during training.

kernel_posterior_fn: Function which creates a tfd$Distribution instance representing the surrogate posterior of the kernel parameter. Default value: default_mean_field_normal_fn().

kernel_posterior_tensor_fn: Function which takes a tfd$Distribution instance and returns a representative value. Default value: function(d) d %>% tfd_sample().

kernel_prior_fn: Function which creates a tfd$Distribution instance. See the default_mean_field_normal_fn docstring for the required parameter signature. Default value: tfd_normal(loc = 0, scale = 1).

kernel_divergence_fn: Function which takes the surrogate posterior distribution, the prior distribution, and random variate sample(s) from the surrogate posterior, and computes or approximates the KL divergence. The distributions are tfd$Distribution-like instances and the sample is a Tensor.

bias_posterior_fn: Function which creates a tfd$Distribution instance representing the surrogate posterior of the bias parameter. Default value: default_mean_field_normal_fn(is_singular = TRUE) (which creates an instance of tfd_deterministic).

bias_posterior_tensor_fn: Function which takes a tfd$Distribution instance and returns a representative value. Default value: function(d) d %>% tfd_sample().

bias_prior_fn: Function which creates a tfd$Distribution instance. See the default_mean_field_normal_fn docstring for the required parameter signature. Default value: NULL (no prior, no variational inference).

bias_divergence_fn: Function which takes the surrogate posterior distribution, the prior distribution, and random variate sample(s) from the surrogate posterior, and computes or approximates the KL divergence. The distributions are tfd$Distribution-like instances and the sample is a Tensor.

seed: Scalar integer which initializes the random number generator. Default value: NULL (i.e., use the global seed).

...: Additional keyword arguments passed to the keras::layer_dense constructed by this layer.

Value:

A Keras layer. The return value depends on object. If object is:

missing or NULL, the Layer instance is returned.
a Sequential model, the model with an additional layer is returned.
a Tensor, the output tensor from layer_instance(object) is returned.
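A minimal construction sketch (hedged: the unit counts and the input_shape value are illustrative, and input_shape is assumed to be forwarded to Keras as with other layer wrappers):

library(keras)
library(tfprobability)

# The kernels are sampled from their surrogate posteriors on each forward
# pass; the default bias posterior is a point mass (tfd_deterministic).
model <- keras_model_sequential() %>%
  layer_dense_flipout(units = 16, activation = "relu", input_shape = 8) %>%
  layer_dense_flipout(units = 1)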
By default, the layer implements a stochastic forward pass via sampling from the kernel and bias posteriors:

kernel, bias ~ posterior
outputs = activation(matmul(inputs, kernel) + bias)
It uses the Flipout estimator (Wen et al., 2018), which performs a Monte Carlo approximation of the distribution integrating over the kernel and bias. Flipout uses roughly twice as many floating point operations as the reparameterization estimator but has the advantage of significantly lower variance.
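Because fresh weight perturbations are sampled on every call, two forward passes over the same input generally differ. A small sketch (the shapes are illustrative; tensorflow and tfprobability are assumed attached):

library(tensorflow)
library(tfprobability)

layer <- layer_dense_flipout(units = 4)  # called without `object`: returns the Layer
x <- tf$ones(shape(2, 8))
y1 <- layer(x)
y2 <- layer(x)
# y1 and y2 generally differ, since the weight perturbations are
# re-sampled on each call.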
The arguments permit separate specification of the surrogate posterior (q(W|x)), prior (p(W)), and divergence for both the kernel and bias distributions.
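For example, a wider-than-default prior can be placed on the kernel through kernel_prior_fn. The sketch below is hypothetical: it assumes the same (dtype, shape, name, trainable, add_variable_fn) parameter signature as tfp$layers$util$default_multivariate_normal_fn, and scale = 10 is an arbitrary choice for a weakly informative prior:

# Hypothetical weakly informative kernel prior.
wide_normal_prior <- function(dtype, shape, name, trainable, add_variable_fn) {
  dist <- tfd_normal(loc = tf$zeros(shape, dtype), scale = 10)
  # Collapse the batch dimensions so the prior is one joint distribution
  # over the whole kernel, mirroring the default prior's construction.
  tfd_independent(dist, reinterpreted_batch_ndims = tf$size(dist$batch_shape_tensor()))
}

layer <- layer_dense_flipout(units = 4, kernel_prior_fn = wide_normal_prior)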
Upon being built, this layer adds losses (accessible via the losses property) representing the divergences of the kernel and/or bias surrogate posteriors and their respective priors. When doing minibatch stochastic optimization, make sure to scale this loss such that it is applied just once per epoch (e.g., if kl is the sum of losses for each element of the batch, you should pass kl / num_examples_per_epoch to your optimizer).
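One way to apply that scaling is to fold the constant into kernel_divergence_fn (a sketch; n_train is a hypothetical stand-in for the number of training examples):

n_train <- 60000  # hypothetical training set size
scaled_kl <- function(q, p, ignore) tfd_kl_divergence(q, p) / n_train

model <- keras_model_sequential() %>%
  layer_dense_flipout(units = 10, kernel_divergence_fn = scaled_kl)

With the default bias_prior_fn = NULL, no bias divergence is added, so only the kernel term needs scaling.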
Other layers: layer_autoregressive(), layer_conv_1d_flipout(), layer_conv_1d_reparameterization(), layer_conv_2d_flipout(), layer_conv_2d_reparameterization(), layer_conv_3d_flipout(), layer_conv_3d_reparameterization(), layer_dense_local_reparameterization(), layer_dense_reparameterization(), layer_dense_variational(), layer_variable()