Utility function to build an MLP with a choice of activation function and weight initialization, and with optional dropout and batch normalization.
build_pytorch_net(
n_in,
n_out,
nodes = c(32, 32),
activation = "relu",
act_pars = list(),
dropout = 0.1,
bias = TRUE,
batch_norm = TRUE,
batch_pars = list(eps = 1e-05, momentum = 0.1, affine = TRUE),
init = "uniform",
init_pars = list()
)
No return value.
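
A minimal call sketch (hypothetical values; assumes survivalmodels is loaded and a Python environment with torch is available via reticulate):

library(survivalmodels)
# Default architecture: two hidden layers of 32 nodes, relu activation,
# dropout of 0.1 and batch normalisation on every layer.
# The constructed network is captured here on the assumption that it is
# made available for downstream use (e.g. as a custom_net argument).
net <- build_pytorch_net(n_in = 10L, n_out = 1L)
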
n_in (integer(1))
Number of input features.
n_out (integer(1))
Number of targets.
nodes (numeric())
Hidden nodes in network; each element in the vector gives the number of hidden nodes in the respective layer.
activation (character(1)|list())
Activation function. Either a single character, in which case the same function is used in all layers, or a list of length length(nodes). See get_pycox_activation for options; per-layer use is illustrated in the sketch after this argument list.
act_pars (list())
Passed to get_pycox_activation.
dropout (numeric())
Optional dropout layer. If NULL then no dropout layer is added; otherwise either a single numeric, which is applied to all layers, or a vector of differing dropout amounts.
bias (logical(1))
If TRUE (default) then a bias parameter is added to all linear layers.
batch_norm (logical(1))
If TRUE (default) then batch normalisation is applied to all layers.
batch_pars (list())
Parameters for batch normalisation, see reticulate::py_help(torch$nn$BatchNorm1d).
init (character(1))
Weight initialization method. See get_pycox_init for options.
init_pars (list())
Passed to get_pycox_init.
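
The per-layer behaviour of activation and dropout can be sketched as follows (hypothetical values; the option names shown should be checked against get_pycox_activation and get_pycox_init):

# Three hidden layers with per-layer activations and dropout rates;
# the activation list must have length length(nodes).
net <- build_pytorch_net(
  n_in = 20L,
  n_out = 2L,
  nodes = c(64, 32, 16),
  activation = list("relu", "elu", "tanh"),
  dropout = c(0.1, 0.2, 0.3),
  init = "kaiming_uniform"
)
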
This function is a helper for R users with limited Python experience. It is currently restricted to simple MLPs; more advanced networks require manual creation with reticulate.
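
For illustration, a rough sketch of manual network creation with reticulate (assumes torch is installed in the active Python environment; the architecture shown is hypothetical):

library(reticulate)
torch <- import("torch")
nn <- torch$nn

# Assemble a simple sequential stack directly from Python classes;
# more complex architectures would subclass nn.Module on the Python side.
net <- nn$Sequential(
  nn$Linear(10L, 32L),
  nn$ReLU(),
  nn$Linear(32L, 1L)
)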