RSNNS (version 0.4-11)

rbfDDA: Create and train an RBF network with the DDA algorithm

Description

Create and train an RBF network with the dynamic decay adjustment (DDA) algorithm. This type of network can only be used for classification. Training typically begins with an empty network, i.e., a network consisting only of input and output units, and successively adds new hidden units. The method is much easier to use than standard RBF training because it requires only two relatively uncritical parameters.
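
For orientation, a minimal sketch of a typical call (the class labels are one-hot encoded with decodeClassLabels, since rbfDDA performs classification only; all training parameters are left at their defaults):

library(RSNNS)

data(iris)
targets <- decodeClassLabels(iris$Species)   # one-hot encode the class labels
inputs  <- as.matrix(iris[, 1:4])

model <- rbfDDA(inputs, targets)             # default parameters are usually sufficient
pred  <- predict(model, inputs)              # one score column per class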

Usage

rbfDDA(x, ...)

# S3 method for default
rbfDDA(x, y, maxit = 1,
  initFunc = "Randomize_Weights", initFuncParams = c(-0.3, 0.3),
  learnFunc = "RBF-DDA", learnFuncParams = c(0.4, 0.2, 5),
  updateFunc = "Topological_Order", updateFuncParams = c(0),
  shufflePatterns = TRUE, linOut = FALSE, ...)

Arguments

x

a matrix with training inputs for the network

...

additional function parameters (currently not used)

y

the corresponding target values

maxit

maximum number of iterations to learn

initFunc

the initialization function to use

initFuncParams

the parameters for the initialization function

learnFunc

the learning function to use

learnFuncParams

the parameters for the learning function

updateFunc

the update function to use

updateFuncParams

the parameters for the update function

shufflePatterns

should the patterns be shuffled?

linOut

sets the activation function of the output units to linear (TRUE) or logistic (FALSE)

Value

an rsnns object.

Details

The default functions do not have to be altered. The learning function RBF-DDA has three parameters: a positive threshold and a negative threshold, which control the addition of units to the network, and a third parameter used only for display purposes in the original SNNS. This third parameter has no effect in RSNNS. See p. 74 of the original SNNS User Manual for details.
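
As an illustration, a sketch of how the two thresholds can be passed via learnFuncParams; the threshold values below are arbitrary illustrative choices, not recommendations:

library(RSNNS)

data(iris)
inputs  <- as.matrix(iris[, 1:4])
targets <- decodeClassLabels(iris[, 5])

# first two entries: positive and negative threshold (illustrative values);
# third entry: SNNS display parameter, ignored by RSNNS
model <- rbfDDA(inputs, targets, learnFuncParams = c(0.3, 0.15, 5))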

References

Berthold, M. R. & Diamond, J. (1995), 'Boosting the Performance of RBF Networks with Dynamic Decay Adjustment', in 'Advances in Neural Information Processing Systems', MIT Press, pp. 521--528.

Hudak, M. (1993), 'RCE classifiers: theory and practice', Cybernetics and Systems 23(5), 483--515.

Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. http://www.ra.cs.uni-tuebingen.de/SNNS/

Examples

# demos (not run automatically):
# demo(iris)
# demo(rbfDDA_spiralsSnnsR)

data(iris)
# shuffle the rows of the dataset
iris <- iris[sample(nrow(iris)), ]
irisValues <- iris[, 1:4]
irisTargets <- decodeClassLabels(iris[, 5])
# split into training and test sets (15% test) and normalize
iris <- splitForTrainingAndTest(irisValues, irisTargets, ratio = 0.15)
iris <- normTrainingAndTestSet(iris)

model <- rbfDDA(iris$inputsTrain, iris$targetsTrain)

summary(model)
plotIterativeError(model)