This function trains a DArch
classifier network with the conjugate
gradient method.
minimizeClassifier(darch, trainData, targetData, length, switchLayers)
darch         An instance of the class DArch.
trainData     The training data matrix.
targetData    The labels for the training data.
length        The number of line searches for the conjugate gradient optimization.
switchLayers  The epoch until which only the upper two layers are trained instead of the full network.
The trained DArch object.
This function is built on the code from G. Hinton et al.
(http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html - last visited
06.06.2013) for the fine-tuning of deep belief nets. The original code is
located in the files 'backpropclassify.m', 'CG_MNIST.m' and
'CG_CLASSIFY_INIT.m'.
It implements the fine-tuning of a classification network with backpropagation,
using a direct translation to R of the minimize
function by C. Rasmussen
(available at http://www.gatsby.ucl.ac.uk/~edward/code/minimize/
- last visited 06.06.2013).
The parameter switchLayers
switches between two training modes. As in the original code, only the top
two layers are trained on their own until the current epoch reaches
switchLayers
. Afterwards the entire network is trained.
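The switching behaviour described above can be sketched as a simple epoch loop (an illustration only, not the package's actual implementation; the variable names are assumptions):

```r
# Simplified epoch loop illustrating the switchLayers behaviour
numEpochs <- 10
switchLayers <- 5
for (epoch in 1:numEpochs) {
  if (epoch <= switchLayers) {
    # Early epochs: optimize only the weights of the upper two layers
    mode <- "upper two layers"
  } else {
    # Later epochs: optimize the weights of the whole network
    mode <- "full network"
  }
  cat(sprintf("Epoch %d: training %s\n", epoch, mode))
}
```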