If you use the package, please cite the following work in your publications:
Bergmeir, C. and Benítez, J.M. (2012), Neural Networks in
R Using the Stuttgart Neural Network Simulator: RSNNS.
Journal of Statistical Software, 46(7), 1-26.
The package has a hierarchical architecture with three levels: the high-level API, the low-level API, and the C++ port of the SNNS kernel (SnnsCLib).
Many demos showing the use of both the low-level and the high-level API of the package are available. To get a list of them, type:
library(RSNNS)
demo()
It is a good idea to start with the demos of the high-level API, which is much more convenient to use. E.g., to access the iris classification demo, type:
demo(iris)
or for the laser regression demo type:
demo(laser)
As the high-level API is already quite powerful and
flexible, you will most probably end up using one
of the functions mlp, dlvq,
rbf, rbfDDA,
elman, jordan,
som, art1,
art2, artmap, or
assoz, together with some pre- and postprocessing.
The S3 classes of the objects they return are all
subclasses of rsnns.
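For instance, a minimal high-level session could look as follows. This is a sketch for illustration only; the hidden layer size and iteration count are arbitrary choices, not recommended settings:

```r
library(RSNNS)

data(iris)
# shuffle the rows so that the class labels are mixed
iris <- iris[sample(nrow(iris)), ]

values  <- normalizeData(iris[, 1:4])    # normalized numeric inputs
targets <- decodeClassLabels(iris[, 5])  # one-hot encoded class matrix

# train a multi-layer perceptron with one hidden layer of 5 units
model <- mlp(values, targets, size = 5, maxit = 50)

predictions <- predict(model, values)
```

The pre-processing functions normalizeData and decodeClassLabels used here are part of the package's pre- and postprocessing tools mentioned above.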
You might also want to have a look at the original SNNS program and the SNNS User Manual 4.2, especially pp 67-87 for explanations of all the parameters of the learning functions, and pp 145-215 for detailed (theoretical) explanations of the methods and advice on their use. There is also javaNNS, the successor of SNNS from the original authors, which makes the C core functionality available from a Java GUI.
Demos ending with "SnnsR" show the use of the low-level API. If you want to do special things with neural networks that are currently not implemented in the high-level API, you can see in these demos how to do it. Many demos are present in both high-level and low-level versions.
The low-level API consists mainly of the class
SnnsR-class, which internally holds a
pointer to a C++ object of the class SnnsCLib,
i.e., an instance of the SNNS kernel. The class
furthermore implements a calling mechanism for methods of
the SnnsCLib object, so that they can be called
conveniently using the "$"-operator. This calling
mechanism also allows for transparent masking of methods
and for extending the kernel with new methods from within
R. See $,SnnsR-method. R functions that are
added by RSNNS to the kernel are documented in this
manual under topics beginning with SnnsRObject$.
Documentation of the original SNNS kernel user interface
functions can be found in the SNNS User Manual 4.2 pp
290-314. A call to, e.g., the SNNS kernel function
krui_getNoOfUnits(...) can be done with
SnnsRObject$getNoOfUnits(...). However, a few
functions were excluded from the wrapping for various
reasons. For more details and other known issues, see the
file /inst/doc/KnownIssues.
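As an illustration, a minimal low-level session might look like this. It is a sketch only; the layer sizes are arbitrary, and the calls used are documented under SnnsRObject$createNet and SnnsRObject$getNoOfUnits:

```r
library(RSNNS)

# instantiate a fresh SNNS kernel (a SnnsCLib object wrapped in an SnnsR object)
snnsObject <- SnnsRObjectFactory()

# create a fully connected feed-forward net with layers of 2, 2, and 1 units
snnsObject$createNet(c(2, 2, 1), fullyConnectedFeedForward = TRUE)

# call the SNNS kernel function krui_getNoOfUnits through the "$" mechanism
snnsObject$getNoOfUnits()
```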
Most of the example data included in SNNS is also present
in this package, see snnsData.
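For example, to inspect the bundled data sets (the pattern set name below is illustrative, taken from the laser demo):

```r
library(RSNNS)

# snnsData is a list of matrices, one entry per original SNNS pattern file
data(snnsData)
names(snnsData)                    # list all included pattern sets

# individual data sets are accessed by name, e.g.:
patterns <- snnsData[["laser_1000.pat"]]
```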
Additional information is also available at the project website:
General neural network literature:
Bishop, C. M. (2003), Neural networks for pattern recognition, Oxford University Press, Oxford.
Haykin, S. S. (1999), Neural networks: a comprehensive foundation, Prentice Hall, Upper Saddle River, NJ.
Kriesel, D. (2007), A Brief Introduction to Neural Networks. http://www.dkriesel.com
Ripley, B. D. (2007), Pattern recognition and neural networks, Cambridge University Press, Cambridge.
Rojas, R. (1996), Neural networks: a systematic introduction, Springer-Verlag, Berlin.
Rumelhart, D. E.; McClelland, J. L. & the PDP Research Group (1986), Parallel distributed processing: explorations in the microstructure of cognition, MIT Press, Cambridge, MA.
Literature on the original SNNS software:
Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network
Simulator User Manual, Version 4.2', IPVR, University of
Stuttgart and WSI, University of Tübingen.
javaNNS, the successor of the original SNNS with a Java
GUI:
Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley.
Other resources:
A function to plot networks from the mlp
function:
mlp, dlvq, rbf,
rbfDDA, elman,
jordan, som,
art1, art2,
artmap, assoz