darch (version 0.12.0)

rpropagation: Resilient backpropagation training for deep architectures.

Description

The function trains a deep architecture with the resilient backpropagation algorithm. It can use four different training methods (see Details). For details of the resilient backpropagation algorithm, see the References.

Usage

rpropagation(darch, trainData, targetData,
  rprop.method = getParameter(".rprop.method"),
  rprop.decFact = getParameter(".rprop.decFact"),
  rprop.incFact = getParameter(".rprop.incFact"),
  rprop.initDelta = getParameter(".rprop.initDelta"),
  rprop.minDelta = getParameter(".rprop.minDelta"),
  rprop.maxDelta = getParameter(".rprop.maxDelta"),
  nesterovMomentum = getParameter(".darch.nesterovMomentum"),
  dropout = getParameter(".darch.dropout"),
  dropConnect = getParameter(".darch.dropout.dropConnect"),
  errorFunction = getParameter(".darch.errorFunction"),
  matMult = getParameter(".matMult"),
  debugMode = getParameter(".debug", F, darch), ...)

Arguments

darch

The deep architecture to train

trainData

The training data

targetData

The expected output for the training data

rprop.method

The training method. Default is "iRprop+".

rprop.decFact

Decreasing factor for the training. Default is 0.6.

rprop.incFact

Increasing factor for the training. Default is 1.2.

rprop.initDelta

Initialisation value for the update. Default is 0.0125.

rprop.minDelta

Lower bound for the step size. Default is 0.000001.

rprop.maxDelta

Upper bound for the step size. Default is 50.

nesterovMomentum

See the darch.nesterovMomentum parameter of darch.

dropout

See the darch.dropout parameter of darch.

dropConnect

See the darch.dropout.dropConnect parameter of darch.

errorFunction

See the darch.errorFunction parameter of darch.

matMult

Matrix multiplication function, internal parameter.

debugMode

Whether debug mode is enabled, internal parameter.

...

Further parameters.

Value

DArch - The trained deep architecture.

Details

RPROP supports dropout and uses the weight update function as defined via the darch.weightUpdateFunction parameter of darch.
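
Because dropout is supported, it can be combined with rpropagation fine-tuning through the dropout parameters of darch. A minimal sketch, using the built-in iris data set; the dropout rate of 0.2 is an arbitrary illustration, not a documented default:

data(iris)
# Fine-tune with rpropagation while dropping 20% of units per layer.
model <- darch(Species ~ ., iris,
  darch.fineTuneFunction = "rpropagation",
  darch.dropout = .2,                # see darch.dropout above
  darch.dropout.dropConnect = FALSE) # per-unit dropout, not DropConnect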

The code for the calculation of the weight change is a translation from the MATLAB code from the Rprop Optimization Toolbox implemented by R. Calandra (see References).

Copyright (c) 2011, Roberto Calandra. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 4. If used in any scientific publications, the publication has to refer specifically to the work published on this webpage.

This software is provided by us "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright holders or any contributor be liable for any direct, indirect, incidental, special, exemplary, or consequential damages however caused and on any theory of liability, whether in contract, strict liability or tort, arising in any way out of the use of this software, even if advised of the possibility of such damage.

The possible training methods (parameter rprop.method) are the following (see References for details; a minimal sketch of the step-size adaptation they share follows the list):

Rprop+: Rprop with Weight-Backtracking
Rprop-: Rprop without Weight-Backtracking
iRprop+: Improved Rprop with Weight-Backtracking
iRprop-: Improved Rprop without Weight-Backtracking
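
To make the roles of the rprop.* parameters concrete, the following is a minimal sketch of the per-weight step-size adaptation these methods share, written in the iRprop- flavour for brevity. It is illustrative only, not the package's internal implementation; the function rpropStep and its interface are hypothetical. Each weight keeps its own step size delta, initialised to rprop.initDelta; only the sign of the gradient determines the direction of the update.

rpropStep <- function(grad, gradOld, delta,
                      incFact = 1.2,   # rprop.incFact
                      decFact = 0.6,   # rprop.decFact
                      minDelta = 1e-6, # rprop.minDelta
                      maxDelta = 50) { # rprop.maxDelta
  change <- sign(grad * gradOld)
  # Gradient kept its sign: accelerate, capped at maxDelta.
  delta[change > 0] <- pmin(delta[change > 0] * incFact, maxDelta)
  # Gradient changed sign: the last step overshot, shrink towards minDelta.
  delta[change < 0] <- pmax(delta[change < 0] * decFact, minDelta)
  # iRprop-: treat the gradient as zero after a sign change.
  grad[change < 0] <- 0
  # Step each weight by its own delta, against the gradient's sign.
  list(update = -sign(grad) * delta, delta = delta, grad = grad)
}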

References

M. Riedmiller, H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, pp. 586-591. IEEE Press, 1993.

C. Igel, M. Huesken. Improving the Rprop Learning Algorithm. In Proceedings of the Second International Symposium on Neural Computation (NC 2000), pp. 115-121. ICSC Academic Press, 2000.

R. Kohavi. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Vol. 2, pp. 1137-1143. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1995.

See Also

darch

Other fine-tuning functions: backpropagation, minimizeAutoencoder, minimizeClassifier

Examples

## Not run:
data(iris)
# Fine-tune a network on iris with iRprop+; inputs are centered and scaled,
# and the softmax output layer produces class probabilities.
model <- darch(Species ~ ., iris, darch.fineTuneFunction = "rpropagation",
 preProc.params = list(method = c("center", "scale")),
 darch.unitFunction = c("softplusUnit", "softmaxUnit"),
 rprop.method = "iRprop+", rprop.decFact = .5, rprop.incFact = 1.2,
 rprop.initDelta = 1/100, rprop.minDelta = 1/1000000, rprop.maxDelta = 50)
## End(Not run)
