RSNNS (version 0.4-9)

rbf: Create and train a radial basis function (RBF) network

Description

The use of an RBF network is similar to that of an mlp. The idea of radial basis function networks comes from function interpolation theory. An RBF network performs a linear combination of n basis functions that are radially symmetric around a center/prototype.
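For intuition, a minimal standalone sketch of such a linear combination in one dimension (this is illustrative only, not the package's internal implementation; the function name rbfOutput and the Gaussian width sigma are assumptions):

# Weighted sum of Gaussian basis functions, each radially
# symmetric around one of the given centers/prototypes.
rbfOutput <- function(x, centers, weights, sigma = 1) {
  phi <- exp(-(x - centers)^2 / (2 * sigma^2))  # basis activations
  sum(weights * phi)                            # linear combination
}
rbfOutput(0.5, centers = c(0, 1, 2), weights = c(0.3, 0.5, 0.2))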

Usage

rbf(x, ...)

## Default S3 method:
rbf(x, y, size = c(5), maxit = 100,
    initFunc = "RBF_Weights", initFuncParams = c(0, 1, 0, 0.02, 0.04),
    learnFunc = "RadialBasisLearning",
    learnFuncParams = c(1e-05, 0, 1e-05, 0.1, 0.8),
    updateFunc = "Topological_Order", updateFuncParams = c(0),
    shufflePatterns = TRUE, linOut = TRUE,
    inputsTest = NULL, targetsTest = NULL, ...)

Arguments

x
a matrix with training inputs for the network
...
additional function parameters (currently not used)
y
the corresponding target values
size
number of units in the hidden layer(s)
maxit
maximum number of iterations to learn
initFunc
the initialization function to use
initFuncParams
the parameters for the initialization function
learnFunc
the learning function to use
learnFuncParams
the parameters for the learning function
updateFunc
the update function to use
updateFuncParams
the parameters for the update function
shufflePatterns
should the patterns be shuffled?
linOut
sets the activation function of the output units to linear or logistic
inputsTest
a matrix with inputs to test the network (see the sketch after this argument list)
targetsTest
the corresponding targets for the test inputs
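The inputsTest and targetsTest arguments allow the test-set error to be tracked during training. A hedged sketch of this usage, assuming the RSNNS helper splitForTrainingAndTest (part of the package); the size, maxit, and ratio values here are arbitrary:

library(RSNNS)

inputs  <- as.matrix(seq(0, 10, 0.1))
targets <- as.matrix(sin(inputs))

# split off a test set
patterns <- splitForTrainingAndTest(inputs, targets, ratio = 0.2)

model <- rbf(patterns$inputsTrain, patterns$targetsTrain,
             size = 20, maxit = 500,
             inputsTest = patterns$inputsTest,
             targetsTest = patterns$targetsTest)

plotIterativeError(model)  # training and test error per iteration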

Value

an rsnns object.

Details

RBF networks are feed-forward networks with one hidden layer. Their activation functions are not sigmoidal (as in the MLP), but radially symmetric (often Gaussian). Information is thus represented locally in the network, in contrast to the MLP, where it is represented globally. The main advantages of RBF networks over MLPs are that the networks are more interpretable, training tends to be easier and faster, and the network only activates in regions of the feature space where it was actually trained, so it can indicate that it "just doesn't know".

Initializing an RBF network can be difficult and may require prior knowledge. Before using this function, you might want to read pp. 172-183 of the SNNS User Manual 4.2. In the current implementation, initialization is performed by a call to RBF_Weights_Kohonen(0,0,0,0,0) followed by a call to the given initFunc (usually RBF_Weights). If this initialization does not fit your needs, you should use the RSNNS low-level interface to implement your own; have a look at the demos/examples. Also note that the initialization parameters have to differ depending on whether linear or logistic output is chosen (normally c(0,1,...) for linear and c(-4,4,...) for logistic output).
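As a hedged illustration of that last point, a call with logistic output units would typically shift the first two initialization parameters to -4 and 4; the remaining parameters here simply keep their defaults, and the data setup is only a placeholder:

inputs  <- as.matrix(seq(0, 10, 0.1))
outputs <- normalizeData(as.matrix(sin(inputs)), "0_1")

# logistic output: initFuncParams typically start with c(-4, 4, ...)
model <- rbf(inputs, outputs, size = 40, maxit = 1000,
             initFuncParams = c(-4, 4, 0, 0.02, 0.04),
             linOut = FALSE)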

References

Poggio, T. & Girosi, F. (1989), 'A Theory of Networks for Approximation and Learning' (A.I. Memo No. 1140, C.B.I.P. Paper No. 31), Technical report, MIT Artificial Intelligence Laboratory.

Vogt, M. (1992), 'Implementierung und Anwendung von Generalized Radial Basis Functions in einem Simulator neuronaler Netze', Master's thesis, IPVR, University of Stuttgart. (in German)

Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. http://www.ra.cs.uni-tuebingen.de/SNNS/

Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley. (in German)

Examples

## Not run: demo(rbf_irisSnnsR)
## Not run: demo(rbf_sin)
## Not run: demo(rbf_sinSnnsR)


library(RSNNS)

inputs <- as.matrix(seq(0, 10, 0.1))
# sine curve with uniform noise of amplitude 0.2 as training data
outputs <- as.matrix(sin(inputs) + 0.2 * runif(length(inputs)))
outputs <- normalizeData(outputs, "0_1")

model <- rbf(inputs, outputs, size = 40, maxit = 1000,
             initFuncParams = c(0, 1, 0, 0.01, 0.01),
             learnFuncParams = c(1e-8, 0, 1e-8, 0.1, 0.8), linOut = TRUE)

par(mfrow = c(2, 1))
plotIterativeError(model)                     # training error per iteration
plot(inputs, outputs)                         # noisy training data
lines(inputs, fitted(model), col = "green")   # fitted network output
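
To apply the trained network to new data, the generic predict method for rsnns objects can be used; for example, continuing the code above (the denser grid is an arbitrary choice):

# predictions on new, denser inputs, overlaid on the previous plot
newInputs <- as.matrix(seq(0, 10, 0.05))
lines(newInputs, predict(model, newInputs), col = "red")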
