Use gradient boosting to create an ensemble of random weight neural network models.
Usage:

boost_rwnn(
  formula,
  data = NULL,
  n_hidden = c(),
  lambda = NULL,
  B = 100,
  epsilon = 0.1,
  method = NULL,
  type = NULL,
  control = list()
)

# S3 method for formula
boost_rwnn(
  formula,
  data = NULL,
  n_hidden = c(),
  lambda = NULL,
  B = 100,
  epsilon = 0.1,
  method = NULL,
  type = NULL,
  control = list()
)
Value: An ERWNN-object.
Arguments:

formula: A formula specifying features and targets used to estimate the parameters of the output layer.

data: A data set (either a data.frame or a tibble) used to estimate the parameters of the output layer.

n_hidden: A vector of integers designating the number of neurons in each of the hidden layers (the length of the vector is taken as the number of hidden layers).

lambda: The penalisation constant(s) passed to either rwnn or ae_rwnn (see the method argument).

B: The number of levels used in the boosting tree, i.e. the number of boosting iterations.

epsilon: The learning rate.

method: The penalisation type passed to ae_rwnn. Set to NULL (default), "l1", or "l2". If NULL, rwnn is used as the base learner (see the sketch below this list).

type: A string indicating whether this is a regression or classification problem.

control: A list of additional arguments passed to the control_rwnn function.
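To show how the method and lambda arguments work together, here is a minimal, hedged sketch that fits a boosted ensemble with an L1-penalised ae_rwnn base learner. It assumes boost_rwnn comes from the RWNN package and reuses the example_data set (with target column y) from the example further below; substitute your own data and settings.

library(RWNN)

# Boosted ensemble using an L1-penalised ae_rwnn base learner:
# method = "l1" switches the base learner from rwnn to ae_rwnn,
# and lambda is the penalisation constant used in every fit.
m_l1 <- boost_rwnn(y ~ ., data = example_data,
                   n_hidden = c(20, 10),  # two hidden layers: 20 and 10 neurons
                   lambda = 0.01,
                   B = 50,                # boosting levels
                   epsilon = 0.1,         # learning rate
                   method = "l1")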
References: Friedman J.H. (2001) "Greedy function approximation: A gradient boosting machine." The Annals of Statistics, 29, 1189-1232.
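Friedman's procedure is what the B and epsilon arguments control: each boosting level fits a base learner to the current residuals and adds a small fraction (the learning rate) of its predictions to the ensemble. The following conceptual sketch of that loop for a regression problem is not the package's implementation; lm() stands in for the random weight neural network base learner, and the function name boost_sketch is made up for illustration.

# Conceptual gradient boosting loop (regression, squared-error loss).
boost_sketch <- function(formula, data, B = 100, epsilon = 0.1) {
  y <- model.response(model.frame(formula, data))
  pred <- rep(mean(y), length(y))            # initialise with the mean response
  learners <- vector("list", B)
  for (b in seq_len(B)) {
    residual <- y - pred                     # pseudo-residuals for squared error
    d <- data
    d[[all.vars(formula)[1]]] <- residual    # replace the target with the residuals
    learners[[b]] <- lm(formula, data = d)   # stand-in base learner fit to residuals
    pred <- pred + epsilon * predict(learners[[b]], newdata = data)
  }
  list(learners = learners, fitted = pred)
}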
Examples:

# Boost an RWNN ensemble on the package's example data.
n_hidden <- 10      # neurons in the single hidden layer
B <- 100            # number of boosting levels
epsilon <- 0.1      # learning rate
lambda <- 0.01      # penalisation constant

m <- boost_rwnn(y ~ ., data = example_data, n_hidden = n_hidden,
                lambda = lambda, B = B, epsilon = epsilon)
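A fitted ERWNN-object is typically used through predict; the lines below assume the package provides the usual predict(object, newdata = ...) method for ERWNN-objects and simply score a few rows of the example data.

# Predictions from the boosted ensemble m fitted above (assumes a predict
# method for ERWNN-objects with the standard newdata argument).
new_obs <- example_data[1:5, ]
predict(m, newdata = new_obs)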