FCNN4R (version 0.6.2)

mlp_teach_bp: Backpropagation (batch) teaching

Description

Backpropagation (a teaching algorithm) is a simple steepest descent method for MSE minimisation, in which the weights are updated according to the (scaled) gradient of the MSE.
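
The update amounts to one full-batch gradient step per epoch. The sketch below is an illustrative R reformulation of that step, not the package's internal implementation; grad_mse is a hypothetical user-supplied function returning the gradient of the MSE over the whole training set, and the L2 penalty is assumed to enter additively as shown.

# Illustrative only: one batch steepest-descent update of the weight vector w.
# grad_mse(w) is assumed to return the gradient of the training-set MSE at w;
# l2reg adds a standard L2 (weight decay) term.
bp_step <- function(w, grad_mse, learn_rate = 0.7, l2reg = 0) {
  w - learn_rate * (grad_mse(w) + l2reg * w)
}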

Usage

mlp_teach_bp(net, input, output, tol_level, max_epochs, learn_rate = 0.7, l2reg = 0, report_freq = 0)

Arguments

net
an object of class mlp_net
input
numeric matrix, each row corresponding to one input vector; the number of columns must equal the number of neurons in the network's input layer (see the example after this list)
output
numeric matrix with rows corresponding to the expected outputs; the number of columns must equal the number of neurons in the network's output layer, and the number of rows must equal the number of input rows
tol_level
numeric value, error (MSE) tolerance level
max_epochs
integer value, maximum number of epochs (iterations)
learn_rate
numeric value, learning rate in the backpropagation algorithm (default 0.7)
l2reg
numeric value, L2 regularization parameter (default 0)
report_freq
integer value, progress report frequency; if set to 0 (the default), no information is printed on the console
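
A minimal usage sketch on the XOR problem, showing how the input and output matrix dimensions relate to the network's layer sizes. mlp_net and mlp_rnd_weights are other FCNN4R functions assumed here with their default behaviour (consult their own help pages); the layer sizes, tolerance and epoch count are arbitrary illustrative values.

library(FCNN4R)

# 2-6-1 network: 2 input neurons, so 'input' needs 2 columns;
# 1 output neuron, so 'output' needs 1 column and as many rows as 'input'.
net <- mlp_net(c(2, 6, 1))
net <- mlp_rnd_weights(net)

inp  <- matrix(c(0, 0,
                 0, 1,
                 1, 0,
                 1, 1), ncol = 2, byrow = TRUE)
outp <- matrix(c(0, 1, 1, 0), ncol = 1)

res <- mlp_teach_bp(net, inp, outp,
                    tol_level = 1e-3, max_epochs = 5000,
                    learn_rate = 0.7, report_freq = 0)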

Value

A two-element list: the first field (net) contains the trained network, and the second (mse) contains the learning history (MSE in consecutive epochs).
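
Continuing the sketch above, the two fields of the returned list can be inspected as follows; mlp_eval is assumed to be the package's function for computing network outputs.

# The returned list separates the fitted network from the error trace.
trained     <- res$net
mse_history <- res$mse

plot(mse_history, type = "l", xlab = "epoch", ylab = "MSE")
mlp_eval(trained, inp)   # outputs of the trained network for the training inputs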

References

A.E. Bryson and Y.C. Ho. Applied optimal control: optimization, estimation, and control. Blaisdell book in the pure and applied sciences. Blaisdell Pub. Co., 1969.

David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, October 1986.