A function used to create a control object for the rwnn function.
control_rwnn(
  n_hidden = NULL,
  n_features = NULL,
  lnorm = NULL,
  bias_hidden = TRUE,
  bias_output = TRUE,
  activation = NULL,
  combine_input = FALSE,
  combine_hidden = TRUE,
  include_data = TRUE,
  include_estimate = TRUE,
  rng = runif,
  rng_pars = list(min = -1, max = 1)
)
Returns a list of control variables.
n_hidden: A vector of integers designating the number of neurons in each hidden layer (the length of the vector is taken as the number of hidden layers).
n_features: The number of randomly chosen features in the RWNN model. Note: this is meant for use in bag_rwnn, and it is not recommended outside of that function.
lnorm: A string indicating the type of regularisation used when estimating the weights in the output layer, either "l1" or "l2" (default).
bias_hidden: A vector of TRUE/FALSE values indicating whether a bias should be added to each hidden layer. The vector should have length 1, or length equal to the number of hidden layers.
bias_output: TRUE/FALSE: Should a bias be added to the output layer?
activation: A vector of strings corresponding to activation functions (see details). The vector should have length 1, or length equal to the number of hidden layers.
combine_input: TRUE/FALSE: Should the input be included when predicting the output?
combine_hidden: TRUE/FALSE: Should all hidden layers be combined to predict the output?
include_data: TRUE/FALSE: Should the original data be included in the returned object? Note: this should almost always be set to TRUE, but using FALSE is more memory efficient in ERWNN-objects.
include_estimate: TRUE/FALSE: Should the rwnn function estimate the output parameters? Note: this should almost always be set to TRUE, but using FALSE is more memory efficient in ERWNN-objects.
rng: The sampling distribution used for generating the weights of the hidden layers, given either as a function or as a string (see details); defaults to runif.
rng_pars: A list of parameters passed to the rng function (defaults to list(min = -1, max = 1)).
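As a brief illustration of how these arguments fit together, the sketch below builds a control object and passes it to rwnn. The exact rwnn() call signature (formula interface, control argument) is an assumption, as are the placeholder data dat and formula y ~ .:

```r
# A minimal sketch, assuming rwnn() accepts a formula and a 'control' argument;
# 'dat' and 'y ~ .' are placeholders, not taken from this page.
library(RWNN)

ctrl <- control_rwnn(
  n_hidden = c(20, 20),              # two hidden layers, 20 neurons each
  activation = c("relu", "sigmoid"), # one activation per hidden layer
  lnorm = "l2",                      # l2-regularised output weights (default)
  rng = runif,                       # hidden weights drawn uniformly ...
  rng_pars = list(min = -1, max = 1) # ... on [-1, 1]
)

# mod <- rwnn(y ~ ., data = dat, control = ctrl)  # assumed call signature
```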
The possible activation functions supplied to 'activation
' are:
"identity"
$$f(x) = x$$
"bentidentity"
$$f(x) = \frac{\sqrt{x^2 + 1} - 1}{2} + x$$
"sigmoid"
$$f(x) = \frac{1}{1 + \exp(-x)}$$
"tanh"
$$f(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}$$
"relu"
$$f(x) = \max\{0, x\}$$
"silu"
(default)$$f(x) = \frac{x}{1 + \exp(-x)}$$
"softplus"
$$f(x) = \ln(1 + \exp(x))$$
"softsign"
$$f(x) = \frac{x}{1 + |x|}$$
"sqnl"
$$f(x) = -1\text{, if }x < -2\text{, }f(x) = x + \frac{x^2}{4}\text{, if }-2 \le x < 0\text{, }f(x) = x - \frac{x^2}{4}\text{, if }0 \le x \le 2\text{, and } f(x) = 2\text{, if }x > 2$$
"gaussian"
$$f(x) = \exp(-x^2)$$
"sqrbf"
$$f(x) = 1 - \frac{x^2}{2}\text{, if }|x| \le 1\text{, }f(x) = \frac{(2 - |x|)^2}{2}\text{, if }1 < |x| < 2\text{, and }f(x) = 0\text{, if }|x| \ge 2$$
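To make the piecewise definitions above concrete, here are illustrative R re-implementations of the default "silu" and the piecewise "sqnl" activations; these mirror the formulas but are not functions exported by the package:

```r
# Illustrative re-implementations of two activations (not exported by RWNN).
silu <- function(x) x / (1 + exp(-x))

sqnl <- function(x) {
  ifelse(x < -2, -1,
         ifelse(x < 0, x + x^2 / 4,
                ifelse(x <= 2, x - x^2 / 4, 2)))
}

sqnl(c(-3, -1, 1, 3))  # -1.00 -0.75  0.75  2.00
```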
The 'rng' argument can also be set to "orthogonal", "torus", "halton", or "sobol" for added stability. The "torus", "halton", and "sobol" methods rely on the torus, halton, and sobol functions from the randtoolbox package. NB: this is not recommended when creating ensembles.
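For example, assuming rng is invoked internally as rng(n, ...) with rng_pars supplying the remaining arguments (which is how the default runif/list(min = -1, max = 1) pair fits together), Gaussian weights are a one-line change; the low-discrepancy samplers use the string interface described above:

```r
# Sketch: Gaussian hidden weights instead of uniform ones.
# Assumes rng is invoked as rng(n, ...) with rng_pars supplying the '...'.
ctrl_norm <- control_rwnn(rng = rnorm, rng_pars = list(mean = 0, sd = 0.5))

# Low-discrepancy alternative via the string interface:
ctrl_sobol <- control_rwnn(rng = "sobol")
```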
Wang, W., and Liu, X. (2017). "The selection of input weights of extreme learning machine: A sample structure preserving point of view." Neurocomputing, 261, 28-36.