ENNreg trains the ENNreg model using batch or minibatch learning procedures.
Usage

ENNreg(
X,
y,
init = NULL,
K = NULL,
batch = TRUE,
nstart = 100,
c = 1,
lambda = 0.9,
xi = 0,
rho = 0,
eps = NULL,
nu = 1e-16,
optimProto = TRUE,
verbose = TRUE,
options = list(maxiter = 1000, rel.error = 1e-04, print = 10),
opt.rmsprop = list(batch_size = 100, epsi = 0.001, rho = 0.9, delta = 1e-08,
  Dtmax = 100)
)

Value

An object of class "ENNreg" with the following components:
loss: Value of the loss function.
param: Parameter values.
K: Number of prototypes.
pred: Predictions on the training set (a list containing the prototype unit activations, the output means, variances and precisions, as well as the lower and upper expectations).
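As a minimal sketch (assuming the returned components are named loss, K and pred as listed above), a small model can be fitted on simulated data and inspected as follows:

set.seed(1)
Xs <- matrix(rnorm(200), ncol = 2)        # 100 simulated objects with 2 attributes
ys <- Xs[, 1] + rnorm(100, sd = 0.1)      # noisy linear response
fit0 <- ENNreg(Xs, ys, K = 5, verbose = FALSE)
fit0$loss            # value of the loss function
fit0$K               # number of prototypes
head(fit0$pred$mux)  # predicted means on the training set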
Arguments

X: Input matrix of size n x p, where n is the number of objects and p the number of attributes.
y: Vector of length n containing observations of the response variable.
init: Initial model generated by ENNreg_init (default = NULL).
K: Number of prototypes (default = NULL; must be supplied if the initial model is not supplied).
batch: If TRUE (default), batch learning is used; otherwise, online (minibatch) learning is used.
nstart: Number of random starts of the k-means algorithm (default: 100, used only if the initial model is not supplied).
c: Multiplicative coefficient applied to scale parameter gamma (default: 1, used only if the initial model is not supplied).
lambda: Parameter of the loss function (default = 0.9).
xi: Regularization coefficient penalizing precision (default = 0).
rho: Regularization coefficient shrinking the solution towards a linear model (default = 0).
eps: Parameter of the loss function (if NULL, set to 0.01 times the standard deviation of y).
nu: Parameter of the loss function to avoid a division by zero (default = 1e-16).
optimProto: If TRUE (default), the initial prototypes are optimized.
verbose: If TRUE (default), intermediate results are displayed.
options: Parameters of the optimization procedure (see Details).
opt.rmsprop: Parameters of the RMSprop algorithm (see Details).
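Continuing the simulated example above (a sketch assuming ENNreg_init takes the data, the response and the number of prototypes as its first arguments, as suggested by the description of the init argument), an initial model can be built explicitly and then refined:

init0 <- ENNreg_init(Xs, ys, K = 5)   # explicit initialization (see ENNreg_init)
fit1 <- ENNreg(Xs, ys, init = init0, xi = 0.01, rho = 0.01, verbose = FALSE)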
Details

If batch=TRUE, function harris from package evclust is used for
optimization. Otherwise, the RMSprop minibatch learning algorithm is used. The three
parameters in list options are:
maxiter: Maximum number of iterations (default: 1000).
rel.error: Relative error for stopping criterion (default: 1e-4).
print: Number of iterations between two displays (default: 10).
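For example (reusing the simulated data above, and the element names shown in the usage section), a longer batch optimization with a tighter stopping criterion can be requested; this is a sketch, not a prescribed setting:

fit2 <- ENNreg(Xs, ys, K = 5, batch = TRUE,
               options = list(maxiter = 2000, rel.error = 1e-05, print = 100))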
Additional parameters for the RMSprop algorithm, used only if batch=FALSE, are contained in
list opt.rmsprop. They are:
batch_size: Minibatch size.
epsi: Global learning rate.
rho: Decay rate.
delta: Small constant to stabilize division by small numbers.
Dtmax: The algorithm stops when the loss has not decreased in the last Dtmax iterations.
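As an illustrative sketch (again with the simulated data above, and element names taken from the usage section), minibatch learning is obtained with batch=FALSE and, optionally, a customized opt.rmsprop list:

fit3 <- ENNreg(Xs, ys, K = 5, batch = FALSE,
               opt.rmsprop = list(batch_size = 50, epsi = 0.001, rho = 0.9,
                                  delta = 1e-08, Dtmax = 100))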
References

Thierry Denoeux. An evidential neural network model for regression based on random fuzzy numbers. In "Belief functions: Theory and applications (proc. of BELIEF 2022)", pages 57-66, Springer, 2022.
Thierry Denoeux. Quantifying prediction uncertainty in regression using random fuzzy sets: the ENNreg model. IEEE Transactions on Fuzzy Systems, Vol. 31, Issue 10, pages 3690-3699, 2023.
See Also

predict.ENNreg, ENNreg_init, ENNreg_cv, ENNreg_holdout
Examples

# Boston dataset
# \donttest{
library(MASS)
X <- as.matrix(scale(Boston[, 1:13]))   # standardized predictors
y <- Boston[, 14]                       # response: median house value (medv)
set.seed(220322)
n <- nrow(Boston)
ntrain <- round(0.7 * n)                # 70/30 train/test split
train <- sample(n, ntrain)
fit <- ENNreg(X[train, ], y[train], K = 30)
plot(y[train], fit$pred$mux, xlab = "observed response", ylab = "predicted response")
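# The following lines are an added sketch (assuming predict.ENNreg accepts a
# newdata argument and returns a list with component mux) showing evaluation
# on the held-out observations:
test <- setdiff(1:n, train)
pred.tst <- predict(fit, newdata = X[test, ])
plot(y[test], pred.tst$mux, xlab = "observed response", ylab = "predicted response (test)")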
# }