Learn R Programming

autotab (version 0.1.2)

encoder_latent: Rebuild the encoder graph to export z_mean and z_log_var

Description

Constructs the encoder computation graph (matching your original encoder_info) so that weights extracted by Encoder_weights() can be applied and the encoder can produce z_mean and z_log_var.

Usage

encoder_latent(
  encoder_input,
  encoder_info,
  latent_dim,
  Lip_en,
  power_iterations
)

Value

A Keras model whose outputs are list(z_mean, z_log_var).

Arguments

encoder_input

Data frame or matrix of the preprocessed variables (used for shape only).

encoder_info

List defining encoder architecture.

latent_dim

Integer. Latent dimension.

Lip_en

Integer (0/1). Whether spectral normalization was used in the encoder.

power_iterations

Integer. Power iterations for spectral normalization (if used).

Details

  • Spectral normalization is sourced from TensorFlow Addons via get_tfa().

  • encoder_input provides shape; the data are not consumed at build time.

  • Apply weights with set_weights() using the output of Encoder_weights().
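Because set_weights() requires the rebuilt graph to expect exactly the same weight layout as the trained encoder, it can help to check that layout up front. A minimal sketch in plain R (no TensorFlow needed) of the kernel shapes implied by an encoder_info list; the input width of 20 is a hypothetical value, not part of this package's API:

```r
# Hypothetical sketch: derive the kernel shapes the rebuilt encoder expects
# from encoder_info, an assumed input dimension, and latent_dim.
encoder_info <- list(
  list("dense", 100, "relu"),
  list("dense",  80, "relu")
)
input_dim  <- 20   # assumed width of the preprocessed data
latent_dim <- 5

# Layer widths: input, then each hidden dense layer's units
units <- c(input_dim, vapply(encoder_info, function(l) l[[2]], numeric(1)))

# Each dense layer has a kernel of shape (in, out) plus a bias of length out;
# the two heads (z_mean, z_log_var) each map the last hidden width to latent_dim.
kernel_shapes <- c(
  Map(c, head(units, -1), tail(units, -1)),
  rep(list(c(tail(units, 1), latent_dim)), 2)
)
str(kernel_shapes)
```

If the shapes produced here do not match those reported by Encoder_weights(), set_weights() will fail, so this is a cheap sanity check before rebuilding the graph.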

See Also

Encoder_weights(), Latent_sample(), Decoder_weights()

Examples

encoder_info <- list(
  list("dense", 100, "relu"),
  list("dense",  80, "relu")
)
# \donttest{
if (reticulate::py_module_available("tensorflow") &&
    exists("training") && exists("data")) {
weights_encoder <- Encoder_weights(
  encoder_layers = 2,
  trained_model  = training$trained_model,  # where training = VAE_train(...)
  lip_enc        = 0,
  pi_enc         = 0,
  BNenc_layers   = 0,
  learn_BN       = 0
)

latent_encoder <- encoder_latent(
  encoder_input    = data,
  encoder_info     = encoder_info,
  latent_dim       = 5,
  Lip_en           = 0,
  power_iterations = 0
)
latent_encoder %>% keras::set_weights(weights_encoder)
}
# }
