mlp_mse(net, input, output)
mlp_grad(net, input, output)
mlp_gradi(net, input, output, i)
mlp_gradij(net, input, i)
mlp_jacob(net, input, i)

net is an object of the mlp_net class.

mlp_mse returns the mean squared error (a numeric value).
mlp_grad returns a two-element list with the first field (grad) containing a numeric vector with the gradient and the second (mse) containing the mean squared error.
mlp_gradi returns a numeric vector with the gradient.
mlp_gradij returns a numeric matrix with gradients of outputs in consecutive columns.
mlp_jacob returns a numeric matrix with derivatives of outputs in consecutive columns.
mlp_mse returns the mean squared error (MSE). MSE is understood
as half of the squared error averaged over all outputs and data records.
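For illustration, a minimal sketch of checking this definition numerically. It assumes these functions come from the FCNN4R package; mlp_net, mlp_rnd_weights, mlp_set_activation and mlp_eval (network construction, weight initialisation, activation choice and evaluation) are assumptions not documented on this page:

library(FCNN4R)                       # assumed package providing these functions
net <- mlp_net(c(2, 3, 1))            # assumed constructor: 2 inputs, 3 hidden, 1 output
net <- mlp_set_activation(net, layer = "all", activation = "sigmoid")  # assumed helper
net <- mlp_rnd_weights(net)           # assumed random weight initialisation
inp <- matrix(runif(20), ncol = 2)    # 10 data records, 2 inputs
outp <- matrix(runif(10), ncol = 1)   # 10 data records, 1 output
# MSE as defined above: half of the squared error averaged over outputs and records
mse_by_hand <- mean((mlp_eval(net, inp) - outp)^2) / 2
all.equal(mse_by_hand, mlp_mse(net, inp, outp))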
mlp_grad computes the gradient of MSE w.r.t. network weights.
This function is useful when implementing batch teaching algorithms.
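A sketch of reading the returned list, continuing the network and data from the sketch above:

g <- mlp_grad(net, inp, outp)    # net, inp, outp as constructed above
g$grad                           # numeric vector: gradient of MSE w.r.t. weights
g$mse                            # the corresponding mean squared error
all.equal(g$mse, mlp_mse(net, inp, outp))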
mlp_gradi computes the gradient of MSE w.r.t. network weights at the ith
data record. This gradient is normalised by the number of outputs only;
its average over all rows (all i) gives the same result as mlp_grad(net, input, output).
This function is useful for implementing on-line teaching algorithms.
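A sketch of the averaging relation described above, continuing the first sketch:

# per-record gradients in columns; their average over all rows should match mlp_grad
gi <- sapply(seq_len(nrow(inp)), function(i) mlp_gradi(net, inp, outp, i))
all.equal(rowMeans(gi), mlp_grad(net, inp, outp)$grad)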
mlp_gradij computes gradients of network outputs,
i.e. the derivatives of outputs w.r.t. active weights, at a given data row.
The derivatives of outputs are placed in consecutive columns of the returned
matrix. Scaled by the output errors and averaged, they give the same result
as mlp_gradi(net, input, output, i). This function is useful in implementing
teaching algorithms using second-order corrections and the Optimal Brain Surgeon
pruning algorithm.
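A sketch of the relation to mlp_gradi stated above, continuing the first sketch. The orientation of the returned matrix (one column per output) follows the description, while the sign convention of the output errors is an assumption here:

i <- 1
gij <- mlp_gradij(net, inp, i)               # one column of weight derivatives per output
dim(gij)                                     # rows: active weights, columns: outputs
err <- mlp_eval(net, inp)[i, ] - outp[i, ]   # output errors (sign convention assumed)
# scaled by the errors and averaged over outputs, should reproduce mlp_gradi at row i
all.equal(as.vector(gij %*% err) / ncol(outp), mlp_gradi(net, inp, outp, i))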
mlp_jacob computes the Jacobian of network outputs, i.e. the derivatives
of outputs w.r.t. inputs, at a given data row.
The derivatives of outputs are placed in consecutive columns of the returned
matrix.
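A sketch checking the Jacobian against central finite differences on the inputs, continuing the first sketch; rows of the returned matrix are assumed to correspond to inputs and columns to outputs, as described above:

i <- 1
eps <- 1e-6
jac_fd <- matrix(0, ncol(inp), ncol(outp))
for (k in seq_len(ncol(inp))) {
  ip <- inp; im <- inp
  ip[i, k] <- ip[i, k] + eps
  im[i, k] <- im[i, k] - eps
  # central difference of each output w.r.t. input k at data row i
  jac_fd[k, ] <- (mlp_eval(net, ip)[i, ] - mlp_eval(net, im)[i, ]) / (2 * eps)
}
all.equal(jac_fd, mlp_jacob(net, inp, i), tolerance = 1e-4)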