RWNN (version 0.4)

ae_rwnn: Auto-encoder pre-trained random weight neural networks

Description

Set-up and estimate weights of a random weight neural network using an auto-encoder for unsupervised pre-training of the hidden weights.

Usage

ae_rwnn(
  formula,
  data = NULL,
  n_hidden = c(),
  lambda = NULL,
  method = "l1",
  type = NULL,
  control = list()
)

# S3 method for formula
ae_rwnn(
  formula,
  data = NULL,
  n_hidden = c(),
  lambda = NULL,
  method = "l1",
  type = NULL,
  control = list()
)

Value

An RWNN-object.

Arguments

formula

A formula specifying features and targets used to estimate the parameters of the output-layer.

data

A data-set (either a data.frame or a tibble) used to estimate the parameters of the output-layer.

n_hidden

A vector of integers designating the number of neurons in each of the hidden layers (the length of the vector is taken as the number of hidden layers).

lambda

A vector of two penalisation constants used when encoding the hidden-weights and training the output-weights, respectively.

method

The penalisation type used for the auto-encoder (either "l1" or "l2").

type

A string indicating whether this is a regression or classification problem.

control

A list of additional arguments passed to the control_rwnn function.
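Taken together, the arguments can be combined as in the following sketch (the data frame `dat` and its columns are illustrative, not from the package; `type` is left at its `NULL` default, and `control` at its empty default):

```r
library(RWNN)

## Illustrative regression data (hypothetical; any numeric data frame works)
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dat$y <- sin(dat$x1) + 0.5 * dat$x2 + rnorm(100, sd = 0.1)

## Two hidden layers of 10 and 5 neurons; lambda[1] penalises the
## auto-encoder encoding of the hidden-weights, lambda[2] the
## estimation of the output-weights
m <- ae_rwnn(y ~ x1 + x2, data = dat,
             n_hidden = c(10, 5), lambda = c(1, 0.01),
             method = "l1")
```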

References

Zhang Y., Wu J., Cai Z., Du B., Yu P.S. (2019) "An unsupervised parameter learning model for RVFL neural network." Neural Networks, 112, 85-97.

Examples

library(RWNN)

n_hidden <- c(20, 15, 10, 5)
lambda <- c(2, 0.01)

## Using L1-norm in the auto-encoder (sparse solution)
m <- ae_rwnn(y ~ ., data = example_data, n_hidden = n_hidden, lambda = lambda, method = "l1")

## Using L2-norm in the auto-encoder (dense solution)
m <- ae_rwnn(y ~ ., data = example_data, n_hidden = n_hidden, lambda = lambda, method = "l2")
