RSNNS (version 0.4-12)

elman: Create and train an Elman network

Description

Elman networks are partially recurrent networks, similar to Jordan networks (function jordan). For details, see the explanations there.

Usage

elman(x, ...)

# S3 method for default
elman(x, y, size = c(5), maxit = 100,
  initFunc = "JE_Weights", initFuncParams = c(1, -1, 0.3, 1, 0.5),
  learnFunc = "JE_BP", learnFuncParams = c(0.2),
  updateFunc = "JE_Order", updateFuncParams = c(0),
  shufflePatterns = FALSE, linOut = TRUE, outContext = FALSE,
  inputsTest = NULL, targetsTest = NULL, ...)

Arguments

x

a matrix with training inputs for the network

...

additional function parameters (currently not used)

y

the corresponding target values

size

number of units in the hidden layer(s)

maxit

maximum number of iterations to learn

initFunc

the initialization function to use

initFuncParams

the parameters for the initialization function

learnFunc

the learning function to use

learnFuncParams

the parameters for the learning function

updateFunc

the update function to use

updateFuncParams

the parameters for the update function

shufflePatterns

should the patterns be shuffled?

linOut

sets the activation function of the output units: linear if TRUE, logistic otherwise

outContext

if TRUE, the context units are also output units (untested)

inputsTest

a matrix with inputs to test the network

targetsTest

the corresponding targets for the test input
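
A minimal sketch of supplying a held-out test set through inputsTest and targetsTest (the 80/20 split below is arbitrary and purely illustrative):

library(RSNNS)

data(snnsData)
patterns <- snnsData$eight_016.pat
inputs <- patterns[, inputColumns(patterns)]
targets <- patterns[, outputColumns(patterns)]

trainIdx <- 1:floor(0.8 * nrow(inputs))
model <- elman(inputs[trainIdx, ], targets[trainIdx, , drop = FALSE],
               size = 8, maxit = 200,
               inputsTest = inputs[-trainIdx, ],
               targetsTest = targets[-trainIdx, , drop = FALSE])

# when test data is given, plotIterativeError also shows the test error
plotIterativeError(model)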

Value

an rsnns object.

Details

Learning in Elman networks: Same as in Jordan networks (see jordan).

Network architecture: The difference between Elman and Jordan networks is that in an Elman network the context units get input not from the output units, but from the hidden units. Furthermore, there is no direct feedback in the context units. In an Elman net, the number of context units and hidden units has to be the same. The main advantage of Elman nets is that the number of context units is not directly determined by the output dimension (as in Jordan nets), but by the number of hidden units, which is more flexible, as it is easy to add/remove hidden units, but not output units.
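
This architectural consequence can be checked directly. The following is a hedged sketch, assuming extractNetInfo() reports the generated units in a unitDefinitions data frame with a type column (the type labels themselves are SNNS-internal):

library(RSNNS)

data(snnsData)
patterns <- snnsData$eight_016.pat
inputs <- patterns[, inputColumns(patterns)]
targets <- patterns[, outputColumns(patterns)]

model <- elman(inputs, targets, size = 8, maxit = 10)

# count units per type; the context units form an extra group of
# special hidden units, one per hidden unit (so 8 of them here)
info <- extractNetInfo(model)
table(info$unitDefinitions$type)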

A detailed description of the theory and the parameters is available, as always, from the SNNS documentation and the other referenced literature.

References

Elman, J. L. (1990), 'Finding structure in time', Cognitive Science 14(2), 179--211.

Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. http://www.ra.cs.uni-tuebingen.de/SNNS/welcome.html

Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley. (in German)

See Also

jordan

Examples

# NOT RUN {
demo(iris)
demo(laser)
demo(eight_elman)
demo(eight_elmanSnnsR)
# }

data(snnsData)
inputs <- snnsData$eight_016.pat[,inputColumns(snnsData$eight_016.pat)]
outputs <- snnsData$eight_016.pat[,outputColumns(snnsData$eight_016.pat)]

par(mfrow=c(1,2))

modelElman <- elman(inputs, outputs, size=8, learnFuncParams=c(0.1), maxit=1000)
modelElman
modelJordan <- jordan(inputs, outputs, size=8, learnFuncParams=c(0.1), maxit=1000)
modelJordan

plotIterativeError(modelElman)
plotIterativeError(modelJordan)

summary(modelElman)
summary(modelJordan)
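
# A possible extension, not part of the original example: out-of-sample
# prediction with the generic predict() method for rsnns objects
predictions <- predict(modelElman, inputs)
head(predictions)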
