Setup of a variational autoencoder (VAE) model.
VAE_model(dim, activation = c(rep("relu", length(dim) - 2), "sigmoid"),
          batch.norm = FALSE, dropout.rate = 0, sd = 1,
          loss.type = c("MSE", "binary.cross", "MMD"), nGPU = 0, ...)
dim: numeric vector of length at least two, giving the dimensions of the input layer (equal to the dimension of the output layer), the hidden layer(s) (if any) and the latent layer (in this order).
activation: character vector of length length(dim) - 1 specifying the activation functions for all hidden layers and the output layer (in this order); note that the input layer does not have an activation function.
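For example, the default value of the activation argument yields one activation per non-input layer; this can be checked in plain R (no TensorFlow required):

```r
## Default activation functions for dim = c(5, 300, 4): one "relu"
## per hidden layer and "sigmoid" for the output layer, so the
## resulting vector has length(dim) - 1 entries.
dim <- c(5, 300, 4)
activation <- c(rep("relu", length(dim) - 2), "sigmoid")
activation  # "relu" "sigmoid"
```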
batch.norm: logical indicating whether batch normalization layers are to be added after each hidden layer.
dropout.rate: numeric value in [0, 1] specifying the fraction of input units to be dropped; see the rate parameter of layer_dropout(). Dropout layers are added after each hidden layer only if this value is positive.
sd: positive numeric value giving the standard deviation of the normal distribution used as prior.
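The prior on the latent space is thus N(0, sd^2) in each latent dimension. Sampling from it, as one would when feeding latent samples to the generator, can be sketched in base R; the latent dimension 4 below is an illustrative choice matching the example further down:

```r
## Draw n = 1000 samples from the N(0, sd^2) latent prior of a VAE
## with a 4-dimensional latent layer (sd = 1, the default).
set.seed(271)
n <- 1000
latent.dim <- 4
sd <- 1
Z <- matrix(rnorm(n * latent.dim, mean = 0, sd = sd),
            nrow = n, ncol = latent.dim)
dim(Z)  # 1000 x 4; such samples can be passed to the returned generator
```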
loss.type: character string indicating the type of reconstruction loss. Currently available are the mean squared error ("MSE"), binary cross entropy ("binary.cross") and (kernel) maximum mean discrepancy ("MMD").
nGPU: non-negative integer specifying the number of GPUs available if the GPU version of TensorFlow is installed. If positive, a (special) multiple-GPU model for data parallelism is instantiated. Note that for multi-layer perceptrons on a few GPUs, this model does not yet yield any computational scale-up (in fact, slightly negative scale-ups are currently likely due to overhead costs).
...: additional arguments passed to loss().
VAE_model() returns a list with components
model: the VAE model (a keras object inheriting from the classes "keras.engine.training.Model", "keras.engine.network.Network", "keras.engine.base_layer.Layer" and "python.builtin.object").
encoder: the encoder (a keras object as model).
generator: the generator (a keras object as model).
type: character string indicating the type of model ("VAE").
dim: see above.
activation: see above.
batch.norm: see above.
dropout.rate: see above.
sd: see above.
loss.type: see above.
dim.train: dimension of the training data (NA unless trained).
batch.size: batch size (NA unless trained).
nepoch: number of epochs (NA unless trained).
Kingma, D. P. and Welling, M. (2014). Stochastic gradient VB and the variational auto-encoder. Second International Conference on Learning Representations (ICLR).
See also https://keras.rstudio.com/articles/examples/variational_autoencoder.html
# NOT RUN {
# (not run, to avoid the win-builder error "Error: Installation of TensorFlow not found")
## Example model with a 5d input layer, 300d hidden layer and 4d latent layer
str(VAE_model(c(5, 300, 4)))
# }
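With the package providing VAE_model() attached and a working TensorFlow installation, the components documented above can be inspected directly; the following sketch is guarded so it is skipped when either is unavailable (the component names are those listed in the Value section):

```r
## Inspect the documented components of the returned list; skipped
## unless VAE_model() and keras/TensorFlow are available in this session.
if (exists("VAE_model") && requireNamespace("keras", quietly = TRUE)) {
  vae <- VAE_model(c(5, 300, 4))
  vae$type       # "VAE"
  vae$dim        # 5 300 4
  vae$dim.train  # NA, since the model has not been trained yet
  ## vae$encoder maps the 5d input to the 4d latent layer;
  ## vae$generator maps 4d latent samples back to the 5d output.
}
```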