
autotab (version 0.1.1)

Encoder_weights: Extract encoder-only weights from a trained Keras model

Description

Pulls just the encoder weights from keras::get_weights(trained_model), skipping any parameters introduced by batch normalization (BN) or spectral normalization (SN). The split index is computed from the number of encoder layers and whether BN/SN were used.

Usage

Encoder_weights(
  encoder_layers,
  trained_model,
  lip_enc,
  pi_enc,
  BNenc_layers,
  learn_BN
)

Value

A list() of encoder weight tensors in order, suitable for set_weights().

Arguments

encoder_layers

Integer. Number of encoder layers (used to compute split index).

trained_model

Keras model. Typically training$trained_model from VAE_train().

lip_enc

Integer (0/1). Whether spectral normalization was used in the encoder.

pi_enc

Integer. Number of power iterations if spectral normalization was used; ignored when lip_enc = 0.

BNenc_layers

Integer. Number of encoder layers that had batch normalization.

learn_BN

Integer (0/1). Whether BN layers learned scale and center.

Details

  • The index arithmetic assumes AutoTab's standard Dense/BN/SN layout. If you substantially change layer ordering or introduce new per-layer parameters, re-check the split index.

  • All model weights remain directly accessible via keras::get_weights(trained_model); this function is a convenience within AutoTab for streamlining encoder reconstruction, not the only way to obtain them.
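As an illustration only, the kind of arithmetic the split index relies on can be sketched as below. The per-layer tensor counts here are assumptions about AutoTab's Dense/BN/SN layout, not the package's internal formula:

```r
# Hypothetical sketch of split-index arithmetic; the real computation
# lives inside Encoder_weights() and may differ in detail.
# Assumed counts: a plain Dense layer contributes 2 tensors (kernel, bias);
# spectral normalization adds an extra vector per wrapped layer; each BN
# layer adds 2 moving statistics, plus gamma and beta if learn_BN = 1.
split_index <- function(encoder_layers, lip_enc = 0,
                        BNenc_layers = 0, learn_BN = 0) {
  per_dense <- 2 + lip_enc            # kernel, bias (+ SN vector, assumed)
  per_bn    <- 2 + learn_BN * 2      # moving mean/var (+ gamma, beta)
  encoder_layers * per_dense + BNenc_layers * per_bn
}

split_index(encoder_layers = 2)  # plain two-layer stack: 2 * 2 = 4 tensors
```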

See Also

encoder_latent(), Decoder_weights(), VAE_train(), Latent_sample()

Examples

# Encoder layer specification (as supplied to VAE_train())
encoder_info <- list(
  list("dense", 100, "relu"),
  list("dense",  80, "relu")
)

# \donttest{
if (reticulate::py_module_available("tensorflow") &&
    exists("training")) {
  weights_encoder <- Encoder_weights(
    encoder_layers = 2,
    trained_model  = training$trained_model,  # where training = VAE_train(...)
    lip_enc        = 0,
    pi_enc         = 0,
    BNenc_layers   = 0,
    learn_BN       = 0
  )
}
# }
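The extracted weights can then be loaded into a rebuilt encoder. A minimal sketch, assuming new_encoder is a Keras model that mirrors the trained encoder's layout (e.g. reconstructed with encoder_latent()):

```r
# Hypothetical follow-up; new_encoder is an assumption, not produced above.
if (exists("weights_encoder") && exists("new_encoder")) {
  keras::set_weights(new_encoder, weights_encoder)
}
```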
