RSNNS (version 0.4-12)

jordan: Create and train a Jordan network

Description

Jordan networks are partially recurrent networks and are similar to Elman networks (see elman). Partially recurrent networks are useful when working with time series data, i.e., when the output of the network should depend not only on the current pattern, but also on the patterns presented before it.
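As a minimal sketch (not from the package documentation; the sunspots data and the normalization step are illustrative), a univariate series can be cast as a one-step-ahead learning problem, and the network's context units then carry information about the patterns presented before:

library(RSNNS)
series <- as.numeric(sunspots)                          # any univariate time series
series <- normalizeData(matrix(series, ncol=1), "0_1")  # scale to [0,1]
inputs  <- matrix(series[-length(series)], ncol=1)      # pattern at time t
targets <- matrix(series[-1], ncol=1)                   # target: value at time t+1
model <- jordan(inputs, targets, size=c(8), maxit=100)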

Usage

jordan(x, ...)

# S3 method for default
jordan(x, y, size = c(5), maxit = 100,
  initFunc = "JE_Weights", initFuncParams = c(1, -1, 0.3, 1, 0.5),
  learnFunc = "JE_BP", learnFuncParams = c(0.2),
  updateFunc = "JE_Order", updateFuncParams = c(0),
  shufflePatterns = FALSE, linOut = TRUE,
  inputsTest = NULL, targetsTest = NULL, ...)

Arguments

x

a matrix with training inputs for the network

...

additional function parameters (currently not used)

y

the corresponding target values

size

number of units in the hidden layer(s)

maxit

maximum number of iterations to learn

initFunc

the initialization function to use

initFuncParams

the parameters for the initialization function

learnFunc

the learning function to use

learnFuncParams

the parameters for the learning function

updateFunc

the update function to use

updateFuncParams

the parameters for the update function

shufflePatterns

should the patterns be shuffled?

linOut

sets the activation function of the output units to linear or logistic

inputsTest

a matrix with inputs to test the network

targetsTest

the corresponding targets for the test input

Value

an rsnns object.

Details

Learning on Jordan networks: Backpropagation algorithms for feed-forward networks can be adapted for use with this type of network. In SNNS, adapted versions of several backpropagation-type algorithms exist for Jordan and Elman networks.

Network architecture: A Jordan network can be seen as a feed-forward network with additional context units in the input layer. These context units take input from themselves (direct feedback) and from the output units. The context units store the current state of the net. In a Jordan net, the number of context units must equal the number of output units.
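As a hedged sketch (the exact unit type labels follow SNNS conventions and may differ across versions), the generated architecture can be inspected with extractNetInfo; with three output units, three context units are expected:

x <- matrix(rnorm(200), ncol=2)     # 100 patterns, 2 inputs
y <- matrix(rnorm(300), ncol=3)     # 3 outputs, so 3 context units are expected
m <- jordan(x, y, size=c(4), maxit=5)
info <- extractNetInfo(m)
table(info$unitDefinitions$type)    # counts of input, hidden, output, and context units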

Initialization of Jordan and Elman nets should be done with the default init function JE_Weights, which has five parameters. The first two parameters define an interval from which the forward connections are randomly chosen. The third parameter gives the self-excitation weights of the context units, the fourth gives the weights between the context units, and the fifth gives the initial activation of the context units.
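For example (a sketch following the parameter order just described; the values are illustrative, and the defaults are those shown under Usage):

model <- jordan(inputs, targets, size=c(8),
                initFunc="JE_Weights",
                # c(min, max, self-excitation, context-to-context, initial activation)
                initFuncParams=c(-0.3, 0.3, 0.8, 1, 0.5))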

Learning functions are JE_BP, JE_BP_Momentum, JE_Quickprop, and JE_Rprop, which are all adapted versions of their standard-procedure counterparts. Update functions that can be used are JE_Order and JE_Special.
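Switching them only requires passing different names and parameters, as in this sketch (the Rprop parameter here is assumed to be the initial step size, following the usual SNNS conventions; see the SNNS manual for the exact meanings):

model <- jordan(inputs, targets, size=c(8), maxit=200,
                learnFunc="JE_Rprop", learnFuncParams=c(0.1),
                updateFunc="JE_Order", updateFuncParams=c(0))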

A detailed description of the theory and the parameters is available, as always, from the SNNS documentation and the other referenced literature.

References

Jordan, M. I. (1986), 'Serial Order: A Parallel, Distributed Processing Approach', Advances in Connectionist Theory Speech 121(ICS-8604), 471-495.

Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. http://www.ra.cs.uni-tuebingen.de/SNNS/welcome.html

Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley. (in German)

See Also

elman

Examples

demo(iris)
demo(laser)
demo(eight_elman)
demo(eight_elmanSnnsR)

data(snnsData)
inputs <- snnsData$laser_1000.pat[,inputColumns(snnsData$laser_1000.pat)]
outputs <- snnsData$laser_1000.pat[,outputColumns(snnsData$laser_1000.pat)]

patterns <- splitForTrainingAndTest(inputs, outputs, ratio=0.15)

modelJordan <- jordan(patterns$inputsTrain, patterns$targetsTrain, 
                       size=c(8), learnFuncParams=c(0.1), maxit=100,
                       inputsTest=patterns$inputsTest, 
                       targetsTest=patterns$targetsTest, linOut=FALSE)

names(modelJordan)

par(mfrow=c(3,3))
plotIterativeError(modelJordan)

plotRegressionError(patterns$targetsTrain, modelJordan$fitted.values)
plotRegressionError(patterns$targetsTest, modelJordan$fittedTestValues)
hist(modelJordan$fitted.values - patterns$targetsTrain, col="lightblue")

plot(inputs, type="l")
plot(inputs[1:100], type="l")
lines(outputs[1:100], col="red")
lines(modelJordan$fitted.values[1:100], col="green")
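Not part of the original example, but a trained rsnns model can also be applied to new data with the standard predict method:

predictions <- predict(modelJordan, patterns$inputsTest)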
