The use of an RBF network is similar to that of an mlp.
The idea of radial basis function networks comes from function
interpolation theory. The RBF performs a linear combination of
n basis functions that are radially symmetric around a center/prototype.
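The following is a minimal sketch of that idea in plain R (illustrative only, not the SNNS implementation; rbfOutput, centers, weights, and sigma are made-up names): the network output for an input x is a weighted sum of Gaussian bumps around the prototypes.

# output(x) = sum_j weights[j] * exp(-||x - centers[j,]||^2 / (2 * sigma^2))
rbfOutput <- function(x, centers, weights, sigma) {
  activations <- apply(centers, 1,
                       function(c) exp(-sum((x - c)^2) / (2 * sigma^2)))
  sum(weights * activations)
}
centers <- matrix(c(0, 0, 1, 1), ncol = 2, byrow = TRUE)  # two prototypes in 2D
rbfOutput(c(0.1, 0.1), centers, weights = c(1, -1), sigma = 0.5)
# the input lies close to the first prototype, so its weight dominates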
rbf(x, ...)

# S3 method for default
rbf(x, y, size = c(5), maxit = 100,
  initFunc = "RBF_Weights", initFuncParams = c(0, 1, 0, 0.02, 0.04),
  learnFunc = "RadialBasisLearning",
  learnFuncParams = c(1e-05, 0, 1e-05, 0.1, 0.8),
  updateFunc = "Topological_Order", updateFuncParams = c(0),
  shufflePatterns = TRUE, linOut = TRUE, inputsTest = NULL,
  targetsTest = NULL, ...)
x: a matrix with training inputs for the network
...: additional function parameters (currently not used)
y: the corresponding target values
size: number of units in the hidden layer(s)
maxit: maximum number of iterations to learn
initFunc: the initialization function to use
initFuncParams: the parameters for the initialization function
learnFunc: the learning function to use
learnFuncParams: the parameters for the learning function
updateFunc: the update function to use
updateFuncParams: the parameters for the update function
shufflePatterns: should the patterns be shuffled?
linOut: sets the activation function of the output units to linear or logistic
inputsTest: a matrix with inputs to test the network
targetsTest: the corresponding targets for the test input
The function returns an rsnns object.
RBF networks are feed-forward networks with one hidden layer. Their activation is not sigmoid (as in MLP), but radially symmetric (often Gaussian). Thereby, information is represented locally in the network (in contrast to MLP, where it is represented globally). The main advantages of RBF networks over MLPs are that the networks are more interpretable, training ought to be easier and faster, and the network only activates in areas of the feature space where it was actually trained, so it can indicate that it "just doesn't know".
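One way to observe this locality with the function documented here (a sketch on assumed toy data, using the defaults from the usage above):

library(RSNNS)

x <- as.matrix(seq(0, 10, 0.1))
y <- normalizeData(as.matrix(sin(x)), "0_1")
model <- rbf(x, y, size = 20, maxit = 500, linOut = TRUE)

# inside the training range the prediction tracks the target; far outside,
# the hidden units barely activate and the output drifts toward a constant
# (roughly the output unit's bias), signaling the net was not trained there
predict(model, as.matrix(c(5, 50)))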
Initialization of an RBF network can be difficult and requires prior knowledge. Before using this function, you might want to read pp. 172-183 of the SNNS User Manual 4.2. In the current implementation, initialization is performed by a call to RBF_Weights_Kohonen(0,0,0,0,0) followed by a call to the given initFunc (usually RBF_Weights). If this initialization doesn't fit your needs, you should use the RSNNS low-level interface to implement your own; have a look at the demos/examples. Also note that depending on whether linear or logistic output is chosen, the initialization parameters have to differ (normally c(0,1,...) for linear and c(-4,4,...) for logistic output).
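As a hedged illustration of that note (toy data; only the first two entries of initFuncParams change, the remaining three are left at their documented defaults):

library(RSNNS)

x <- as.matrix(seq(0, 1, 0.01))
y <- as.matrix(x^2)

# linear output units: initialization range c(0, 1, ...)
modelLin <- rbf(x, y, size = 10, linOut = TRUE,
                initFuncParams = c(0, 1, 0, 0.02, 0.04))

# logistic output units: initialization range c(-4, 4, ...)
modelLog <- rbf(x, y, size = 10, linOut = FALSE,
                initFuncParams = c(-4, 4, 0, 0.02, 0.04))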
Poggio, T. & Girosi, F. (1989), 'A Theory of Networks for Approximation and Learning' (A.I. Memo No. 1140, C.B.I.P. Paper No. 31), Technical report, MIT Artificial Intelligence Laboratory.
Vogt, M. (1992), 'Implementierung und Anwendung von Generalized Radial Basis Functions in einem Simulator neuronaler Netze', Master's thesis, IPVR, University of Stuttgart. (in German)
Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. http://www.ra.cs.uni-tuebingen.de/SNNS/
Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley. (in German)
# NOT RUN {
demo(rbf_irisSnnsR)
demo(rbf_sin)
demo(rbf_sinSnnsR)
# }
# NOT RUN {
library(RSNNS)

inputs <- as.matrix(seq(0, 10, 0.1))
# noisy sine curve: uniform noise in [0, 0.2]
outputs <- as.matrix(sin(inputs) + runif(nrow(inputs), 0, 0.2))
outputs <- normalizeData(outputs, "0_1")

model <- rbf(inputs, outputs, size = 40, maxit = 1000,
             initFuncParams = c(0, 1, 0, 0.01, 0.01),
             learnFuncParams = c(1e-8, 0, 1e-8, 0.1, 0.8), linOut = TRUE)

par(mfrow = c(2, 1))
plotIterativeError(model)                     # training error per epoch
plot(inputs, outputs)                         # noisy targets
lines(inputs, fitted(model), col = "green")   # network fit
# }
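If you hold out test data, you can pass it via inputsTest and targetsTest so that the test error is tracked during training alongside the training error (the random split below is an illustrative assumption, reusing inputs and outputs from the example above):

# NOT RUN {
idx <- sample(nrow(inputs), 80)
model <- rbf(inputs[idx, , drop = FALSE], outputs[idx, , drop = FALSE],
             size = 40, maxit = 1000, linOut = TRUE,
             inputsTest = inputs[-idx, , drop = FALSE],
             targetsTest = outputs[-idx, , drop = FALSE])
plotIterativeError(model)  # shows both training and test error curves
# }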