daltoolboxdp (version 1.2.737)

autoenc_ed: Autoencoder - Encode-Decode

Description

Creates a deep learning autoencoder that encodes and decodes sequences of observations. Wraps a PyTorch implementation via reticulate.

Usage

autoenc_ed(
  input_size,
  encoding_size,
  batch_size = 32,
  num_epochs = 1000,
  learning_rate = 0.001
)

Value

An autoenc_ed object.

Arguments

input_size

Integer. Number of input features per observation.

encoding_size

Integer. Size of the latent (bottleneck) representation.

batch_size

Integer. Mini-batch size used during training. Default is 32.

num_epochs

Integer. Maximum number of training epochs. Default is 1000.

learning_rate

Numeric. Optimizer learning rate. Default is 0.001.

Details

This variant both compresses inputs into a latent representation and reconstructs them back to input space, allowing the reconstruction error to be used as a quality metric or for anomaly detection.
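Because the model reconstructs inputs, per-row reconstruction error can serve as an anomaly score. The sketch below illustrates the idea in base R only; X_hat is a stand-in for the output of daltoolbox::transform(ae, X) (here simulated with a noisy copy, so the snippet runs without PyTorch).

```r
# Hypothetical sketch: anomaly detection via per-row reconstruction error.
# X_hat simulates an autoencoder reconstruction so no Python backend is needed.
set.seed(42)
X <- matrix(rnorm(1000), nrow = 50, ncol = 20)
X_hat <- X + matrix(rnorm(1000, sd = 0.1), nrow = 50, ncol = 20)

# Per-row mean squared reconstruction error (one score per observation)
row_mse <- rowMeans((X - X_hat)^2)

# Flag observations whose error exceeds mean + 3 standard deviations
threshold <- mean(row_mse) + 3 * sd(row_mse)
anomalies <- which(row_mse > threshold)
```

With a fitted autoenc_ed model, the same scoring applies after replacing the simulated X_hat with the actual reconstruction.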

References

Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786), 504-507.

Paszke, A., et al. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems 32.

Examples

if (FALSE) {
# Requirements: Python with torch installed and reticulate configured.

# 1) Create sample data (50 x 20)
X <- matrix(rnorm(1000), nrow = 50, ncol = 20)

# 2) Fit encode-decode autoencoder (5-D bottleneck)
ae <- autoenc_ed(input_size = 20, encoding_size = 5, num_epochs = 50)
ae <- daltoolbox::fit(ae, X)

# 3) Reconstruct inputs and inspect reconstruction error
X_hat <- daltoolbox::transform(ae, X)  # same dimensions as X
mean((X - X_hat)^2)                    # simple MSE across all entries
}

# More examples:
# https://github.com/cefet-rj-dal/daltoolbox/blob/main/autoencoder/autoenc_ed.md
