The 'dnnFit' function takes the input data, the target values, the network architecture, and the loss function as arguments, and returns a trained model that minimizes the loss function. The function also supports various options for regularization and optimization of the model.
See dNNmodel
for details on how to specify a deep learning model.
Parameters in dnnControl
are used to control the model fitting process. The loss function can be specified as dnnControl(loss = "lossFunction"). Currently, the following loss functions are supported:
'mse': Mean square error loss = 0.5*sum(dy^2), where dy = y - yhat is the residual
'cox': Cox partial likelihood loss = -sum(delta*(yhat - log(S0)))
'bin': Cross-entropy loss = -sum(y*log(p) + (1-y)*log(1-p))
'log': Log-linear cost = -sum(y*log(lambda) - lambda)
'mae': Mean absolute error loss = sum(abs(dy))
Additional loss functions will be added to the library in the future.
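The loss formulas above can be sketched directly. The Python function below is an illustration of the five formulas, not the package's implementation; the risk-set handling for the 'cox' case (observations sorted by decreasing time, S0 as the running sum of exp(yhat)) is an assumption about how S0 is accumulated.

```python
import numpy as np

def dnn_loss(y, yhat, loss="mse", delta=None):
    """Illustrative sketch of the loss functions listed above.

    y     : observed targets
    yhat  : predictions (probability p for 'bin', rate lambda for 'log',
            linear predictor for 'cox')
    delta : event indicators, used only by the 'cox' loss
    """
    y = np.asarray(y, dtype=float)
    yhat = np.asarray(yhat, dtype=float)
    if loss == "mse":                    # 0.5*sum(dy^2), dy = y - yhat
        return 0.5 * np.sum((y - yhat) ** 2)
    if loss == "mae":                    # sum(abs(dy))
        return np.sum(np.abs(y - yhat))
    if loss == "bin":                    # cross-entropy, p = yhat
        p = yhat
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    if loss == "log":                    # Poisson log-linear cost, lambda = yhat
        lam = yhat
        return -np.sum(y * np.log(lam) - lam)
    if loss == "cox":                    # -sum(delta*(yhat - log(S0)))
        # Assumes observations sorted by decreasing survival time, so the
        # risk-set sum S0 for subject i is the cumulative sum of exp(yhat).
        S0 = np.cumsum(np.exp(yhat))
        return -np.sum(delta * (yhat - np.log(S0)))
    raise ValueError("unsupported loss: " + loss)
```

For example, with y = (1, 2) and yhat = (1, 1), the 'mse' loss is 0.5*(0^2 + 1^2) = 0.5 and the 'mae' loss is 1.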
'dnnFit2' is a C++ version of dnnFit that runs about 20% faster; however, only loss = 'mse' and loss = 'cox' are currently supported.
When the variance of the covariate matrix X is too large, standardize X with xbar = scale(x).
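For readers working outside R, the effect of R's scale(x) can be reproduced as follows; this is a minimal sketch assuming the default scale() behavior (subtract each column mean, divide by the sample standard deviation):

```python
import numpy as np

def standardize(x):
    """Column-standardize a covariate matrix, mirroring R's default
    scale(x): center each column at its mean and divide by its sample
    standard deviation (ddof=1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
```

Each column of the result has mean 0 and sample standard deviation 1, which keeps the gradient steps for all covariates on a comparable scale during fitting.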