Fit a censored quantile regression neural network model for the tau-quantile by minimizing a cost function based on smooth Huber-norm approximations to the tilted absolute value and ramp functions. Left censoring can be turned on by setting lower to a value greater than -Inf. A simplified form of the finite smoothing algorithm, in which the nlm optimization algorithm is run with values of the eps approximation tolerance progressively reduced in magnitude over the sequence eps.seq, is used to set the QRNN weights and biases. Local minima of the cost function can be avoided by setting n.trials, which controls the number of repeated runs from different starting weights and biases, to a value greater than one.
(Note: if eps.seq is set to a single, sufficiently large value and tau is set to 0.5, then the result will be a standard least squares regression model. Keeping eps.seq at the same single large value but using other values of tau leads to expectile regression.)
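For orientation, here is a minimal sketch of a censored fit on synthetic data; the data and argument values are purely illustrative, and eps.seq is left at its default.

  library(qrnn)

  ## Synthetic left-censored data (illustrative only)
  set.seed(1)
  x <- as.matrix(runif(500))
  y <- as.matrix(pmax(0, x - 0.2 + rnorm(500, sd = 0.2)))

  ## Censored 0.9-quantile fit with two restarts to avoid local minima
  fit <- qrnn.fit(x = x, y = y, n.hidden = 3, tau = 0.9,
                  lower = 0, n.trials = 2)
  p <- qrnn.predict(x, fit)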
The hidden layer transfer function Th and its derivative Th.prime should be set to sigmoid, elu, or softplus and sigmoid.prime, elu.prime, or softplus.prime, respectively, for a nonlinear model, and to linear and linear.prime for a linear model.
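Continuing the sketch above, the transfer function pair is passed explicitly; the choice of softplus here is only an example.

  ## Nonlinear model with a softplus hidden layer
  fit.nl  <- qrnn.fit(x, y, n.hidden = 3, tau = 0.5,
                      Th = softplus, Th.prime = softplus.prime)

  ## Linear model (n.hidden is ignored and set to one internally)
  fit.lin <- qrnn.fit(x, y, n.hidden = 1, tau = 0.5,
                      Th = linear, Th.prime = linear.prime)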
If invoked, the monotone argument enforces increasing behaviour between specified columns of x and model outputs. This holds if Th and To are monotone increasing functions. In this case, the exp function is applied to the relevant weights following initialization and during optimization; manual adjustment of init.weights or qrnn.initialize may be needed due to differences in scaling of the constrained and unconstrained weights. Decreasing behaviour can be forced by transforming the relevant covariates, e.g., by reversing sign.
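As an illustration of the sign-reversal device, the following sketch (on synthetic data) constrains the first covariate to have an increasing effect and the second, after reversing its sign, a decreasing effect.

  ## Increasing effect of column 1, decreasing effect of column 2
  x2 <- cbind(runif(300), runif(300))
  y2 <- as.matrix(x2[, 1] - x2[, 2] + rnorm(300, sd = 0.1))
  x2[, 2] <- -x2[, 2]        # reverse sign to obtain a decreasing constraint
  fit.mono <- qrnn.fit(x = x2, y = y2, n.hidden = 2, tau = 0.5,
                       monotone = c(1, 2))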
The additive argument sets relevant input-hidden layer weights to zero, resulting in a purely additive model. Interactions between covariates are thus suppressed, leading to a compromise in flexibility between linear quantile regression and the quantile regression neural network. Borrowing strength by using a composite model for multiple regression quantiles is also possible (see composite.stack). Applying the monotone constraint in combination with the composite model allows one to simultaneously estimate multiple non-crossing quantiles; the resulting monotone composite QRNN (MCQRNN) is demonstrated in mcqrnn.
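A minimal sketch of simultaneous non-crossing quantiles, assuming the mcqrnn.fit and mcqrnn.predict wrappers documented under mcqrnn, which handle the composite stacking and monotone constraint internally:

  ## Monotone composite QRNN for three non-crossing quantiles
  fit.mc <- mcqrnn.fit(x, y, n.hidden = 2, tau = c(0.1, 0.5, 0.9))
  p.mc   <- mcqrnn.predict(x, fit.mc)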
In the linear case, model complexity does not depend on the number of hidden nodes; the value of n.hidden is ignored and is instead set to one internally. In the nonlinear case, n.hidden controls the overall complexity of the model. As an added means of avoiding overfitting, weight penalty regularization for the magnitude of the input-hidden layer weights (excluding biases) can be applied by setting penalty to a nonzero value. (For the linear model, this penalizes both input-hidden and hidden-output layer weights, leading to a quantile ridge regression model. In this case, kernel quantile ridge regression can be performed with the aid of the qrnn.rbf function.) Finally, if the bag argument is set to TRUE, models are trained on bootstrapped x and y sample pairs; bootstrap aggregation (bagging) can be turned on by setting n.ensemble to a value greater than one. Averaging over an ensemble of bagged models will also tend to alleviate overfitting.
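For example, a sketch combining regularization with a small bagged ensemble, assuming qrnn.predict returns one column of predictions per ensemble member (argument values chosen only for illustration):

  ## Weight penalty plus bagging over a five-member ensemble
  fit.bag <- qrnn.fit(x, y, n.hidden = 4, tau = 0.5,
                      penalty = 0.01, bag = TRUE, n.ensemble = 5)
  p.bag   <- qrnn.predict(x, fit.bag)
  p.avg   <- rowMeans(p.bag)   # average over ensemble members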
The gam.style function can be used to produce modified generalized additive model effects plots, which are useful for visualizing the modelled covariate-response relationships.
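Continuing the earlier sketch, an effects plot for the first covariate might be produced as follows; the x/parms/column arguments are assumed from the gam.style interface.

  ## Modified GAM-style effects plot for covariate 1 of the earlier fit
  gam.style(x, parms = fit, column = 1)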
Note: values of x and y need not be standardized or rescaled by the user. All variables are automatically scaled to zero mean and unit standard deviation prior to fitting, and parameters are automatically rescaled by qrnn.predict. Values of eps.seq are relative to the residuals in standard deviation units.