darch (version 0.10.0)

darch.default: Fit deep neural network.

Description

Fit deep neural network with optional pre-training and fine-tuning.
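
For orientation, a minimal, hedged example follows (the XOR data, layer sizes, and epoch counts are illustrative choices, not defaults of this package):

# A minimal sketch: train a small network on the XOR problem with
# optional RBM pre-training followed by backpropagation fine-tuning.
library(darch)
trainData <- matrix(c(0, 0, 0, 1, 1, 0, 1, 1), ncol = 2, byrow = TRUE)
trainTargets <- matrix(c(0, 1, 1, 0), nrow = 4)
model <- darch(trainData, trainTargets, layers = c(2, 3, 1),
               rbm.numEpochs = 5,         # optional pre-training
               darch.numEpochs = 100,     # fine-tuning epochs
               darch.bootstrap = FALSE)   # too few samples to split
predict(model)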

Usage

# S3 method for default
darch(x, y, layers = NULL, ..., xValid = NULL,
  yValid = NULL, scale = F, normalizeWeights = F, rbm.batchSize = 1,
  rbm.trainOutputLayer = T, rbm.learnRateWeights = 0.1,
  rbm.learnRateBiasVisible = 0.1, rbm.learnRateBiasHidden = 0.1,
  rbm.weightCost = 2e-04, rbm.initialMomentum = 0.5,
  rbm.finalMomentum = 0.9, rbm.momentumSwitch = 5,
  rbm.visibleUnitFunction = sigmUnitFunc,
  rbm.hiddenUnitFunction = sigmUnitFuncSwitch,
  rbm.updateFunction = rbmUpdate, rbm.errorFunction = mseError,
  rbm.genWeightFunction = generateWeights, rbm.numCD = 1,
  rbm.numEpochs = 0, darch = NULL, darch.batchSize = 1,
  darch.bootstrap = T, darch.genWeightFunc = generateWeights,
  darch.logLevel = INFO, darch.fineTuneFunction = backpropagation,
  darch.initialMomentum = 0.5, darch.finalMomentum = 0.9,
  darch.momentumSwitch = 5, darch.learnRateWeights = 0.1,
  darch.learnRateBiases = 0.1, darch.errorFunction = mseError,
  darch.dropoutInput = 0, darch.dropoutHidden = 0,
  darch.dropoutOneMaskPerEpoch = F,
  darch.layerFunctionDefault = sigmoidUnitDerivative,
  darch.layerFunctions = list(),
  darch.layerFunction.maxout.poolSize =
    getOption("darch.unitFunction.maxout.poolSize", NULL),
  darch.isBin = F, darch.isClass = T, darch.stopErr = -Inf,
  darch.stopClassErr = -Inf, darch.stopValidErr = -Inf,
  darch.stopValidClassErr = -Inf, darch.numEpochs = 0,
  darch.retainData = T, dataSet = NULL, dataSetValid = NULL,
  gputools = T)

Arguments

x

Input data.

y

Target data.

layers

Vector containing one integer for the number of neurons of each layer. Defaults to c(a, 10, b), where a is the number of columns in the training data and b the number of columns in the targets.
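
For illustration (the column counts here are assumed): with four input columns and two target columns, the default is equivalent to layers = c(4, 10, 2); a deeper network can be requested explicitly:

# Sketch: two hidden layers instead of the default single one.
darch(x, y, layers = c(4, 100, 50, 2))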

...

Additional parameters.

xValid

Validation input data.

yValid

Validation target data.

scale

Logical or logical vector indicating whether or which columns to scale.

normalizeWeights

Logical indicating whether to normalize weights (L2 norm = 1).

rbm.batchSize

Pre-training batch size.

rbm.trainOutputLayer

Logical indicating whether to train the output layer RBM as well (only useful for unsupervised fine-tuning).

rbm.learnRateWeights

Learn rate for the weights during pre-training.

rbm.learnRateBiasVisible

Learn rate for the visible bias weights during pre-training.

rbm.learnRateBiasHidden

Learn rate for the hidden bias weights during pre-training.

rbm.weightCost

Pre-training weight cost. Higher values result in lower weights.

rbm.initialMomentum

Initial momentum during pre-training.

rbm.finalMomentum

Final momentum during pre-training.

rbm.momentumSwitch

Epoch during which momentum is switched from the initial to the final value.

rbm.visibleUnitFunction

Visible unit function during pre-training.

rbm.hiddenUnitFunction

Hidden unit function during pre-training.

rbm.updateFunction

Update function during pre-training.

rbm.errorFunction

Error function during pre-training.

rbm.genWeightFunction

Function to generate the initial RBM weights.

rbm.numCD

Number of full steps for which contrastive divergence is performed.

rbm.numEpochs

Number of pre-training epochs.
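
A sketch of how the rbm.* settings combine (all values are illustrative, not recommendations):

# Sketch: 10 epochs of RBM pre-training using CD-2 and a lower
# weight learn rate than the default.
darch(x, y, layers = c(10, 50, 2),
      rbm.numEpochs = 10,
      rbm.numCD = 2,
      rbm.learnRateWeights = 0.05)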

darch

Existing DArch instance for which training is to be resumed.
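
A sketch of resuming training (assumes model was returned by a previous darch() call):

# Sketch: fine-tune an existing DArch instance for 50 more epochs,
# skipping any further pre-training.
model <- darch(x, y, darch = model,
               rbm.numEpochs = 0, darch.numEpochs = 50)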

darch.batchSize

Fine-tuning batch size, i.e. the number of training samples that are presented to the network before weight updates are performed (pre-training uses rbm.batchSize).

darch.bootstrap

Logical indicating whether to use bootstrapping to create a training and validation data set from the given data.

darch.genWeightFunc

Function to generate the initial weights of the DBN.

darch.logLevel

Log level. futile.logger::INFO by default.

darch.fineTuneFunction

Fine-tuning function.

darch.initialMomentum

Initial momentum during fine-tuning.

darch.finalMomentum

Final momentum during fine-tuning.

darch.momentumSwitch

Epoch at which to switch from the initial to the final momentum value.

darch.learnRateWeights

Learn rate for the weights during fine-tuning.

darch.learnRateBiases

Learn rate for the biases during fine-tuning.

darch.errorFunction

Error function during fine-tuning.

darch.dropoutInput

Dropout rate on the network input.

darch.dropoutHidden

Dropout rate on the hidden layers.

darch.dropoutOneMaskPerEpoch

Whether to generate a new mask for each batch (FALSE, default) or for each epoch (TRUE).
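
A sketch combining the dropout settings (the rates are illustrative):

# Sketch: drop 20% of the input units and 50% of the hidden units,
# generating a new dropout mask per epoch instead of per batch.
darch(x, y,
      darch.dropoutInput = 0.2,
      darch.dropoutHidden = 0.5,
      darch.dropoutOneMaskPerEpoch = TRUE)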

darch.layerFunctionDefault

Default activation function for the DBN layers.

darch.layerFunctions

A list of activation functions; names() should be a character vector of layer indices. Note that the function with name "1" is the layer function between layers 1 and 2, i.e. it produces the output of layer 2. The input layer itself has no layer function, since the input values are used directly. See the sketch after darch.layerFunction.maxout.poolSize below.

darch.layerFunction.maxout.poolSize

Pool size for maxout units, when using the maxout activation function.
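
A sketch of per-layer activation functions (maxoutUnitDerivative is assumed to be the maxout unit function shipped with this version; treat that name as an assumption):

# Sketch: maxout with a pool size of 2 between layers 1 and 2, the
# default sigmoid derivative everywhere else.
darch(x, y, layers = c(10, 20, 2),
      darch.layerFunctions = list("1" = maxoutUnitDerivative),
      darch.layerFunction.maxout.poolSize = 2)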

darch.isBin

Whether network outputs are to be treated as binary values.

darch.isClass

Whether classification errors should be printed during fine-tuning.

darch.stopErr

When the value of the error function is lower than or equal to this value, training is stopped.

darch.stopClassErr

When the classification error is lower than or equal to this value, training is stopped (0..100).

darch.stopValidErr

When the value of the error function on the validation data is lower than or equal to this value, training is stopped.

darch.stopValidClassErr

When the classification error on the validation data is lower than or equal to this value, training is stopped (0..100).
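
A sketch of the early-stopping settings (the threshold is illustrative):

# Sketch: run up to 1000 fine-tuning epochs, but stop as soon as the
# classification error on the validation data reaches 5% or less.
darch(x, y, xValid = xValid, yValid = yValid,
      darch.numEpochs = 1000,
      darch.stopValidClassErr = 5)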

darch.numEpochs

Number of epochs of fine-tuning.

darch.retainData

Logical indicating whether to store the training data in the DArch instance after training.

dataSet

DataSet instance, as passed from darch.DataSet(); may also be specified manually.

dataSetValid

DataSet instance containing validation data.

gputools

Logical indicating whether to use gputools for matrix multiplication, if available.

Value

Fitted DArch instance.

See Also

Other darch interface functions: darch.DataSet; darch.formula; darch; predict.DArch, predict.darch; print.DArch, print.darch