autotab (version 0.1.2)

decoder_model: Builds the decoder graph for an AutoTab VAE

Description

Reconstructs the decoder computational graph used during training. This is used internally by VAE_train() and externally when you want to load the trained decoder weights and generate new samples by sampling the latent space.

Usage

decoder_model(
  decoder_input,
  decoder_info,
  latent_dim,
  feat_dist,
  lip_dec,
  pi_dec,
  max_std = 10,
  min_val = 0.001,
  temperature = 0.5
)

Value

A compiled Keras model representing the decoder computational graph. You can load trained decoder weights with Decoder_weights() + set_weights(), then call predict(decoder, Z) where Z is an n x latent_dim matrix (typically a sample from your latent space).
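For example, a minimal generation sketch (assuming a rebuilt decoder with latent_dim = 5, trained weights already set, and a standard-normal prior; if the prior was learned, sample from the learned prior instead):

n <- 100
Z <- matrix(rnorm(n * 5), nrow = n, ncol = 5)  # n x latent_dim latent draws
synthetic <- predict(decoder, Z)               # decoded/generated samples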

Arguments

decoder_input

Ignored; pass NULL. No input is needed when building the computational graph.

decoder_info

List defining the decoder architecture, e.g. list(list("dense", 80, "relu"), list("dropout", 0.1), list("dense", 100, "relu")). Each dense entry is list("dense", units, activation). Each dropout entry is list("dropout", rate). Optional elements: [[4]] L2 flag (0/1), [[5]] L2 value, [[6]] BN flag (FALSE/TRUE), [[7]] BN momentum, [[8]] BN scale/center (TRUE/FALSE).
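For instance, a sketch with the optional elements appended after the layer entries (all values illustrative; the indices follow the numbering above):

decoder_info <- list(
  list("dense", 80, "relu"),   # layer 1: units, activation
  list("dropout", 0.1),        # layer 2: dropout rate
  list("dense", 100, "relu"),  # layer 3
  1,      # [[4]] L2 flag (0/1)
  0.001,  # [[5]] L2 penalty value
  TRUE,   # [[6]] batch-normalization flag
  0.99,   # [[7]] BN momentum
  TRUE    # [[8]] BN scale/center
)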

latent_dim

Integer. Latent dimension used during training.

feat_dist

Data frame with columns column_name, distribution, num_params (created by extracting_distribution() and set via set_feat_dist()).
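As an illustration only (in practice this data frame comes from extracting_distribution(), and the exact distribution labels are package-defined):

feat_dist <- data.frame(
  column_name  = c("age", "smoker", "region"),
  distribution = c("gaussian", "bernoulli", "categorical"),
  num_params   = c(2, 1, 4)  # e.g. mean/SD, probability, 4 category levels
)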

lip_dec

Integer flag (0 or 1). If 1, spectral normalization is applied to the dense hidden layers.

pi_dec

Integer. Power-iteration count for spectral normalization.

max_std

Numeric. Upper bound for Gaussian SD heads (default 10.0).

min_val

Numeric. Lower bound (epsilon) for Gaussian SD heads (default 1e-3).

temperature

Numeric. Gumbel–Softmax temperature for categorical heads (default 0.5).

Details

The final output layer of an AutoTab decoder is sliced per feature according to the distribution recorded in feat_dist: Gaussian heads output a mean and an SD (the SD bounded between min_val and max_std), Bernoulli heads output logits passed through a sigmoid to yield probabilities, and categorical heads use Gumbel–Softmax sampling with the given temperature.
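Under the assumption that num_params in feat_dist counts each head's output units (2 for a Gaussian mean/SD pair, 1 for a Bernoulli probability, one per level for a categorical feature), the width of the final output layer is simply:

output_width <- sum(feat_dist$num_params)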

If lip_dec = 1, dense hidden layers are wrapped with spectral normalization using pi_dec power iterations.

See Also

VAE_train(), Decoder_weights(), encoder_latent(), Latent_sample(), extracting_distribution()

Examples

# \donttest{
if (reticulate::py_module_available("tensorflow") &&
    exists("training") &&
    exists("feat_dist")) {

  # Assume you already have feat_dist set via set_feat_dist(feat_dist)
  decoder_info <- list(
    list("dense", 80, "relu"),
    list("dense", 100, "relu")
  )

  # Extract the trained decoder weights from the fitted VAE
  weights_decoder <- Decoder_weights(
    encoder_layers = 2,
    trained_model  = training$trained_model,
    lip_enc        = 0,
    pi_enc         = 0,
    prior_learn    = "fixed",
    BNenc_layers   = 0,
    learn_BN       = 0
  )

  # Rebuild the decoder graph, then load the trained weights
  decoder <- decoder_model(
    decoder_input = NULL,
    decoder_info  = decoder_info,
    latent_dim    = 5,
    feat_dist     = feat_dist,
    lip_dec       = 0,
    pi_dec        = 0
  )

  keras::set_weights(decoder, weights_decoder)
}
# }
