layer_autoregressive_transform: an autoregressive normalizing flow layer, given a layer_autoregressive.

Following Papamakarios et al. (2017), given an autoregressive model \(p(x)\) with conditional distributions in the location-scale family, we can construct a normalizing flow for \(p(x)\).
layer_autoregressive_transform(object, made, ...)
object: Model or layer object.

made: A MADE layer (a layer_autoregressive()), which must output two parameters for each input (see the sketch below).

...: Additional parameters passed to the Keras layer.

Returns a Keras layer.
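For illustration, a MADE layer satisfying this requirement could be built roughly as follows. This is a minimal sketch: params = 2 makes the network emit a shift and a log-scale per input dimension, while hidden_units and activation are arbitrary illustrative choices.

library(tfprobability)

# a MADE network that outputs two parameters (shift and log-scale) per input dimension
made <- layer_autoregressive(params = 2L, hidden_units = c(10L, 10L), activation = "relu")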
Specifically, suppose made is a layer_autoregressive() -- a layer implementing a Masked Autoencoder for Distribution Estimation (MADE) -- that computes location and log-scale parameters \(made(x)[i]\) for each input \(x[i]\). Then we can represent the autoregressive model \(p(x)\) as \(x = f(u)\), where \(u\) is drawn from some base distribution and \(f\) is an invertible and differentiable function (i.e., a Bijector) whose inverse \(f^{-1}(x)\) is defined by:
library(tensorflow)
library(zeallot)

f_inverse <- function(x) {
  # unpack the location (shift) and log-scale parameters produced by the MADE layer
  c(shift, log_scale) %<-% tf$unstack(made(x), 2L, axis = -1L)
  # invert the location-scale transform: u = (x - shift) / exp(log_scale)
  (x - shift) * tf$math$exp(-log_scale)
}
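The forward direction \(f\) has no closed form of this kind: because \(made(x)\) depends on \(x\), computing \(x = f(u)\) proceeds iteratively, one pass per event dimension, which is how masked autoregressive flows evaluate their forward transform. A minimal sketch, assuming the libraries and the made layer from above; f_forward and event_size are hypothetical names used only for illustration:

f_forward <- function(u, event_size) {
  # start from zeros and refine; after event_size passes x is exact,
  # since each pass fixes one more autoregressive component
  x <- tf$zeros_like(u)
  for (i in seq_len(event_size)) {
    c(shift, log_scale) %<-% tf$unstack(made(x), 2L, axis = -1L)
    x <- u * tf$math$exp(log_scale) + shift
  }
  x
}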
Given a layer_autoregressive() made, a layer_autoregressive_transform() transforms an input tfd_* distribution \(p(u)\) into an output tfd_* distribution \(p(x)\), where \(x = f(u)\).
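Putting the pieces together, an unconditional density estimator in the spirit of the MADE example in Papamakarios et al. (2017) might look roughly like the following. This is a hedged sketch rather than code from the package documentation: the toy data, the choice of base distribution, and all hyperparameters (hidden_units, epochs, batch_size) are illustrative assumptions, and the model takes a zero-width placeholder input because the density is unconditional.

library(keras)
library(tfprobability)

# toy two-dimensional data (illustrative)
n <- 2000
x2 <- rnorm(n) * 2
x1 <- rnorm(n) + x2^2 / 4
data <- cbind(x1, x2)

# MADE network emitting a shift and a log-scale per dimension
made <- layer_autoregressive(params = 2L, hidden_units = c(10L, 10L), activation = "relu")

model <- keras_model_sequential() %>%
  # base distribution p(u): a standard normal with event shape [2]
  layer_distribution_lambda(function(t) tfd_multivariate_normal_diag(loc = c(0, 0))) %>%
  # normalizing flow p(x), with x = f(u) parameterized by the MADE network
  layer_autoregressive_transform(made = made)

# maximize the log-likelihood of the data under the transformed distribution
model %>% compile(
  optimizer = "adam",
  loss = function(y, rv_y) -rv_y$log_prob(y)
)

model %>% fit(
  x = matrix(0, nrow = n, ncol = 0),  # empty placeholder input
  y = data,
  epochs = 10,
  batch_size = 25
)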