minimizeClassifier: Conjugate gradient for a classification network
Description
This function trains a DArch classifier network with the conjugate gradient method.
Usage
minimizeClassifier(darch, trainData, targetData, epoch, length, switchLayers)
Arguments
darch
An instance of the class DArch.
trainData
The training data matrix.
targetData
The labels for the training data.
epoch
The current epoch of the training.
length
The number of line searches for the conjugate gradient minimization.
switchLayers
The epoch at which training switches from only the upper two layers to the full network.
Value
The trained DArch object.
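
The following is a minimal usage sketch. The darch object is assumed to already
exist as a pre-trained DArch instance, and the data dimensions, the epoch loop
and the parameter values are illustrative assumptions, not taken from the package:

# Illustrative data: 100 examples with 784 inputs and one-hot labels for 10 classes
trainData  <- matrix(runif(100 * 784), nrow = 100)
targetData <- diag(10)[sample(1:10, 100, replace = TRUE), ]

# Fine-tune with 3 line searches per epoch; train only the top two
# layers until epoch 5, then the entire network (see Details)
for (epoch in 1:10) {
  darch <- minimizeClassifier(darch, trainData, targetData,
                              epoch, length = 3, switchLayers = 5)
}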
Details
This function is built on the basis of the code from G. Hinton et al.
(http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html
- last visited 06.06.2013) for the fine-tuning of deep belief
nets. The original code is located in the files
'backpropclassify.m', 'CG_MNIST.m' and 'CG_CLASSIFY_INIT.m'.
It implements the fine-tuning of a classification network with
backpropagation, using a direct translation to R of the minimize
function by C. Rasmussen (available at
http://www.gatsby.ucl.ac.uk/~edward/code/minimize/ - last
visited 06.06.2013). The parameter switchLayers switches between
two training modes. As in the original code, the top two layers
can be trained alone until epoch is equal to switchLayers;
afterwards the entire network is trained.
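
The switching behaviour can be pictured with the following schematic sketch.
It is not the package code, and whether the switch epoch itself still counts
as top-layer training is an assumption here:

# Schematic of the layer-switching logic (illustration only)
chooseTrainingScope <- function(epoch, switchLayers) {
  if (epoch <= switchLayers) {
    "upper two layers only"   # early epochs: lower, pre-trained layers stay fixed
  } else {
    "entire network"          # later epochs: conjugate gradient on all weights
  }
}

chooseTrainingScope(3, 5)   # "upper two layers only"
chooseTrainingScope(8, 5)   # "entire network"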