darch

Create deep architectures in the R programming language

Installation

Using devtools, the latest development version from Git (identifiable by a version number ending in 9000 or greater) can be installed with

library(devtools)
install_github("maddin79/darch")

Then, use ?darch to view the package documentation, or example("darch") to load some examples (these are not executed directly, but provide example.* functions).

About

The darch package builds on code from G. E. Hinton and R. R. Salakhutdinov (available as "Matlab Code for deep belief nets"; last visited: 2015-11-12).

This package generates neural networks with many layers (deep architectures) and trains them with the method introduced in the publications "A fast learning algorithm for deep belief nets" (G. E. Hinton, S. Osindero, Y. W. Teh) and "Reducing the dimensionality of data with neural networks" (G. E. Hinton, R. R. Salakhutdinov). The method combines pre-training with the contrastive divergence algorithm published by G. E. Hinton (2002) and fine-tuning with commonly used training algorithms such as backpropagation or conjugate gradient, as well as more recent techniques like dropout and maxout.
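As an illustration, here is a minimal sketch of that workflow on toy XOR data. The argument names (layers, rbm.numEpochs, darch.numEpochs, darch.fineTuneFunction) are assumptions based on the darch() interface and may differ between package versions; see ?darch for the authoritative signature.

library(darch)

# XOR toy problem: two binary inputs, one binary target.
trainData <- matrix(c(0, 0,
                      0, 1,
                      1, 0,
                      1, 1), ncol = 2, byrow = TRUE)
trainTargets <- matrix(c(0, 1, 1, 0), nrow = 4)

# Pre-train the stacked RBMs with contrastive divergence,
# then fine-tune the whole network with backpropagation.
model <- darch(trainData, trainTargets,
               layers = c(2, 10, 1),      # visible, hidden, and output units
               rbm.numEpochs = 5,         # contrastive-divergence pre-training
               darch.numEpochs = 100,     # backpropagation fine-tuning
               darch.fineTuneFunction = backpropagation)

# Forward-propagate the training data through the fitted network.
predict(model, newdata = trainData)

The same fit can also be expressed through the formula interface (darch.formula) with a data frame holding inputs and targets.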

Copyright (C) 2013-2015 Martin Drees and contributors

References

Hinton, G. E., S. Osindero, and Y. W. Teh (2006). "A fast learning algorithm for deep belief nets". Neural Computation 18(7), pp. 1527-1554. DOI: 10.1162/neco.2006.18.7.1527.

Hinton, G. E. and R. R. Salakhutdinov (2006). "Reducing the dimensionality of data with neural networks". Science 313(5786), pp. 504-507. DOI: 10.1126/science.1127647.

Hinton, G. E. (2002). "Training products of experts by minimizing contrastive divergence". Neural Computation 14(8), pp. 1771-1800. DOI: 10.1162/089976602760128018.

Hinton, Geoffrey E. et al. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". CoRR abs/1207.0580. URL: arxiv.org.

Goodfellow, Ian J. et al. (2013). "Maxout Networks". In: Proceedings of the 30th International Conference on Machine Learning (ICML 2013), Atlanta, GA, USA, June 16-21, 2013, pp. 1319-1327. URL: jmlr.org.

Drees, Martin (2013). "Implementierung und Analyse von tiefen Architekturen in R" [Implementation and Analysis of Deep Architectures in R]. In German. Master's thesis. Fachhochschule Dortmund.

Rueckert, Johannes (2015). "Extending the Darch library for deep architectures". Project thesis. Fachhochschule Dortmund. URL: saviola.de.

Monthly Downloads

9

Version

0.10.0

License

GPL (>= 2) | file LICENSE

Maintainer

Martin Drees

Last Published

May 5th, 2018

Functions in darch (0.10.0)

linearUnitFunc

Calculates the linear neuron output with no transfer function
createDataSet,ANY,missing,formula,missing-method

Constructor function for DataSet objects.
setNumVisible<-

Sets the number of visible units
binSigmoidUnit

Binary sigmoid unit function.
DataSet-class

Class for specifying datasets.
getDropoutOneMaskPerEpoch,DArch-method

Returns the dropout usage
fineTuneDArch

Fine tuning function for the deep architecture
getCancel

Returns the cancel value
addLayerField

Adds a field to a layer
getHiddenBiasesInc

Returns the update value for the biases of the hidden units.
getRBMList,DArch-method

Returns a list of RBMs of the DArch object
getEpochs,Net-method

Returns the number of epochs this Net has been trained for
maxoutUnitDerivative

Maxout unit function with unit derivatives.
getLearnRateBiases

Returns the learning rate for the biases
applyDropoutMask

Applies the given dropout mask to the given data row-wise.
linearUnit

Linear unit function.
getDropoutHiddenLayers

Returns the dropout rate for the hidden layers
newRBM

Constructor function for RBM object.
getFF

Returns if the weights are saved as ff objects
RBM-class

Class for restricted Boltzmann machines
darch.formula

Fit a deep neural network using a formula and a single data frame or matrix.
getLayerWeights,DArch-method

Returns the weights of a layer with the given index
preTrainDArch,DArch-method

Pre-trains a DArch network
getInitialMomentum

Returns the initial momentum of the Net
generateDropoutMask

Dropout mask generator function.
getLearnRateWeights

Returns the learning rate for the weights.
preTrainDArch

Pre-trains a DArch network
getWeightInc

Returns the update value for the weights.
incrementEpochs,Net-method

Increment the number of epochs this Net has been trained for
getHiddenBiases

Returns the biases of the hidden units.
getDropoutMasks,DArch-method

Returns the dropout masks
newDArch

Constructor function for DArch objects.
createDataSet

Create data set using data, targets, a formula, and possibly an existing data set.
getExecOutput,DArch-method

Returns an execution output of a DArch object
Net-class

Abstract class for neural networks.
saveRBM

Saves an RBM network
getLearnRateBiasHidden

Returns the learning rate for the hidden biases.
quadraticError

Quadratic error function
darch.default

Fit deep neural network.
getExecuteFunction,DArch-method

Returns the execution function for the network
createDataSet,ANY,ANY,missing,DataSet-method

Create a new DataSet by filling an existing one with new data.
getLayerFunction,DArch-method

Returns the function for a layer with the given index
addLayerField,DArch-method

Adds a field to a layer
getBatchSize

Returns the batch size
setWeightInc<-

Sets the update values for the weights
getDropoutInputLayer

Returns the dropout rate for the input layer
incrementEpochs

Increment the number of epochs this Net has been trained for
setHiddenBiases<-

Sets the biases of the hidden units
darch

Fit a deep neural network.
getLayerField

Returns a field in a layer
setCancelMessage<-

Sets the cancel message.
getVisibleUnitStates

Returns a list with the states of the visible units.
crossEntropyError

Cross entropy error function
linearUnitDerivative

Linear unit function with unit derivatives.
getHiddenUnitStates

Returns a list with the states of the hidden units.
loadRBMFFWeights

Loads weights and biases for an RBM network from an ffData file.
setDropoutMasks<-

Set the dropout masks.
setBatchSize<-

Sets the batch size
addExecOutput,DArch-method

Adds an execution output for a DArch object
getCancelMessage

Returns the cancel message
getCancelMessage,DArch-method

Returns the cancel message
getExecOutput

Returns an execution output of a DArch object
getDropoutMasks

Returns the dropout masks
getMomentum

Returns the momentum of the Net
getVisibleBiasesInc

Returns the update value for the biases of the visible units.
resetExecOutput

Resets the execution outputs of a DArch object
setInitialMomentum<-,Net-method

Sets the initial momentum of the Net
getDropoutMask,DArch-method

Returns the dropout mask for the given layer
getInitialMomentum,Net-method

Returns the initial momentum of the Net
getGenWeightFunction

Returns the function for generating weight matrices.
setFinalMomentum<-

Sets the final momentum of the Net
getLayer

Returns a layer with the given index from the network
setDropoutMask<-

Set the dropout mask for the given layer.
getVisibleBiases

Returns the biases of the visible units.
getLayers

Returns the layers of the network
getEpochs

Returns the number of epochs this Net has been trained for
DArch-class

Class for deep architectures
getLayerFunction

Returns the function for a layer with the given index
print.DArch

Print DArch details.
getExecOutputs,DArch-method

Returns the execution outputs of a DArch object
minimize

Minimize a differentiable multivariate function.
addLayer,DArch-method

Adds a layer to the DArch object
minimizeClassifier

Conjugate gradient for a classification network
getLayer,DArch-method

Returns a layer with the given index from the network
loadRBM

Loads an RBM network
trainRBM,RBM-method

Trains an RBM with contrastive divergence
getExecOutputs

Returns the execution outputs of a DArch object
getWeightCost

Returns the weight cost for the training
getOutput

Returns the output of the network
getFinalMomentum

Returns the final momentum of the Net
getExecuteFunction

Returns the execution function for the network
setLayers<-

Sets the layers for the network
getLayerWeights

Returns the weights of a layer with the given index
getRBMList

Returns a list of RBMs of the DArch object
generateRBMs

Generates the RBMs for the pre-training.
getFineTuneFunction

Returns the fine tuning function for the network
addExecOutput

Adds an execution output for a DArch object
rpropagation

Resilient backpropagation training for deep architectures.
getWeights

darch.DataSet

Fit a deep neural network using a DataSet object.
setCancel<-

Set whether the learning shall be canceled.
getCancel,DArch-method

Returns the cancel value
setPosPhaseData<-

Sets the positive phase data for the training
validateDataSet

Validate DataSet
addLayer

Adds a layer to the DArch object
createDataSet,ANY,ANY,missing,missing-method

Create a DataSet using data and targets.
setVisibleUnitFunction<-

Sets the unit function of the visible units
getLayerField,DArch-method

Returns a field in a layer
getDropoutHiddenLayers,DArch-method

Returns the dropout rate for the hidden layers
getDropoutOneMaskPerEpoch

Returns the dropout usage
removeLayerField

Removes a field from a layer
setGenWeightFunction<-

Sets the function for generating weight matrices.
getNumVisible

Returns the number of visible units
getStats

Returns the list of statistics for the network
getFineTuneFunction,DArch-method

Returns the fine tuning function for the network
provideMNIST

Provides MNIST data set in the given folder.
setLearnRateBiasVisible<-

Sets the learning rates of the biases for the visible units
tanSigmoidUnitDerivative

Continuous Tan-Sigmoid unit function with unit derivatives.
setHiddenUnitFunction<-

Sets the unit function of the hidden units
setVisibleUnitStates<-

Sets the states of the visible units
runDArch

Execute the darch
resetRBM

Resets the weights and biases of an RBM network
setWeights<-

setDropoutInputLayer<-

Sets the dropout rate for the input layer.
sigmUnitFuncSwitch

Calculates the neuron output with the sigmoid function
validateDataSet,DataSet-method

Validate DataSet
rbmUpdate

Function for updating the weights and biases of an RBM network
minimizeAutoencoder

Conjugate gradient for an autoencoder network
getErrorFunction

Returns the error function for the network
setFF<-

Sets if the weights are saved as ff objects
mseError

Mean squared error function
setLearnRateBiasHidden<-

Sets the learning rates of the biases for the hidden units
setVisibleBiases<-

Sets the biases of the visible units
setHiddenBiasesInc<-

Sets the update value for the biases of the hidden units
softmaxUnitDerivative

Softmax unit function with unit derivatives.
setDropoutHiddenLayers<-

Sets the dropout rate for the hidden layers.
getNormalizeWeights

Returns whether weight normalization is active
sigmoidUnitDerivative

Sigmoid unit function with unit derivatives.
saveRBMFFWeights

Saves weights and biases of an RBM network into an ffData file.
getLayers,DArch-method

Returns the layers of the network
setDropoutOneMaskPerEpoch<-

Set dropout mask usage
setInitialMomentum<-

Sets the initial momentum of the Net
setMomentumSwitch<-

Sets the momentum switch
getPosPhaseData

Returns the data for the positive phase.
setNormalizeWeights<-

Set whether weight normalization should be performed
setLayer<-

Sets a layer with the given index for the network
setHiddenUnitStates<-

Sets the states of the hidden units
setErrorFunction<-

Sets the error function for the network
setFineTuneFunction<-

Sets the fine tuning function for the network
setNormalizeWeights<-,Net-method

Set whether weight normalization should be performed
setUpdateFunction<-

setLayerWeights<-

Sets the weights of a layer with the given index
getDropoutInputLayer,DArch-method

Returns the dropout rate for the input layer
predict.DArch

Forward-propagate data.
setDropoutOneMaskPerEpoch<-,DArch-method

Set dropout mask usage
setLearnRateBiases<-

Sets the learning rate for the biases
setVisibleBiasesInc<-

Sets the update value for the biases of the visible units
loadDArch

Loads a DArch network
setLogLevel<-

Sets the log level
getNormalizeWeights,Net-method

Returns whether weight normalization is active
saveDArch

Saves a DArch network
tanSigmoidUnit

Continuous Tan-Sigmoid unit function.
setNumHidden<-

Sets the number of hidden units
readMNIST

Function for generating ff files of the MNIST Database
setLearnRateWeights<-

Sets the learning rate for the weights.
setWeightCost<-

Sets the weight costs for the training
generateWeights

Generates a weight matrix.
setRBMList<-

Sets the list of RBMs
sigmoidUnit

Sigmoid unit function.
backpropagation

Backpropagation learning function
setStats<-

Adds a list of statistics to the network
setLayerFunction<-

Sets the function for a layer with the given index
softmaxUnit

Softmax unit function.
getLearnRateBiases,DArch-method

Returns the learning rate for the biases
getLearnRateBiasVisible

Returns the learning rate for the visible biases.
makeStartEndPoints

Makes start- and end-points for the batches.
setExecuteFunction<-

Sets the execution function for the network
setLayerField<-

Sets a field in a layer.
getMomentumSwitch

Returns the momentum switch
getDropoutMask

Returns the dropout mask for the given layer
getNumHidden

Returns the number of hidden units
resetDArch

Resets the weights and biases of the DArch network
trainRBM

Trains an RBM with contrastive divergence
fineTuneDArch,DArch-method

Fine tuning function for the deep architecture
setOutput<-

Sets the output of the network
sigmUnitFunc

Calculates the neuron output with the sigmoid function
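Most of the get*/set* pairs above are S4 accessors, and the setters (names ending in <-) follow R's replacement-function convention. Below is a short hypothetical sketch, assuming a DArch object `model` such as the one fitted in the earlier example; the values are purely illustrative.

library(darch)

# Replacement-function form of the setters listed above:
setLearnRateWeights(model) <- 0.01   # learning rate for the weights
setBatchSize(model) <- 2             # mini-batch size

# Matching getters:
getLearnRateWeights(model)
getBatchSize(model)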