Function to initialize the weights and biases of a neural network using the algorithm of Nguyen and Widrow (1990).
initnw(neurons, p, n, npar)
neurons: Number of neurons.
p: Number of predictors.
n: Number of cases.
npar: Number of parameters to be estimated, counting only weights and biases; it should be equal to \(neurons \times (1+1+p)+1\).
A list containing initial values for the weights and biases. The first \(s\) components of the list (where \(s\) is the number of neurons) contain vectors with the initial values for the weights and biases of the \(k\)-th neuron, i.e. \((\omega_k, b_k, \beta_1^{(k)},...,\beta_p^{(k)})'\).
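For illustration, the returned list could be pulled apart as below. This access pattern is inferred from the layout described above, not taken from the package source, so check it against the actual output.

library(brnn)
w <- initnw(neurons = 3, p = 4, n = 10, npar = 3 * (1 + 1 + 4) + 1)
w_1 <- w[[1]]          # vector for the first neuron: (omega_1, b_1, beta_1^(1), ..., beta_4^(1))
omega_1 <- w_1[1]      # output-layer weight of the first neuron
b_1 <- w_1[2]          # bias of the first neuron
beta_1 <- w_1[-(1:2)]  # input weights of the first neuron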
The algorithm is described in Nguyen and Widrow (1990) and in other references, see for example Sivanandam and Sumathi (2005). The algorithm is briefly described below; a short R sketch of the steps follows the list.
1. Compute the scaling factor \(\theta=0.7 p^{1/n}\).
2. Initialize the weights and biases of each neuron at random, for example by generating random numbers from \(U(-0.5,0.5)\).
3. For each neuron:
   - compute \(\eta_k=\sqrt{\sum_{j=1}^p (\beta_j^{(k)})^2}\);
   - update \((\beta_1^{(k)},...,\beta_p^{(k)})'\) as $$\beta_j^{(k)}=\frac{\theta \beta_j^{(k)}}{\eta_k},\quad j=1,...,p;$$
   - update the bias \(b_k\) by generating a random number from \(U(-\theta,\theta)\).
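The steps above can be written as a short, self-contained R function. This is only a sketch of the procedure as described, not the internal code of initnw; the helper name nw_init is invented here for illustration.

nw_init <- function(neurons, p, n) {
  # Step 1: scaling factor, as given above
  theta <- 0.7 * p^(1 / n)
  lapply(seq_len(neurons), function(k) {
    # Step 2: random start for (omega_k, b_k, beta_1^(k), ..., beta_p^(k)) from U(-0.5, 0.5)
    w <- runif(2 + p, min = -0.5, max = 0.5)
    # Step 3: compute the norm eta_k of the beta's and rescale them by theta / eta_k,
    eta <- sqrt(sum(w[-(1:2)]^2))
    w[-(1:2)] <- theta * w[-(1:2)] / eta
    # then redraw the bias b_k from U(-theta, theta)
    w[2] <- runif(1, min = -theta, max = theta)
    w
  })
}

set.seed(1)
str(nw_init(neurons = 3, p = 4, n = 10))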
Nguyen, D. and Widrow, B. 1990. "Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights." Proceedings of the International Joint Conference on Neural Networks (IJCNN), 3, 21-26.
Sivanandam, S.N. and Sumathi, S. 2005. Introduction to Neural Networks Using MATLAB 6.0. McGraw Hill, first edition.
# Load the library
library(brnn)

# Set parameters
neurons <- 3
p <- 4
n <- 10
npar <- neurons * (1 + 1 + p) + 1

initnw(neurons = neurons, p = p, n = n, npar = npar)
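Because the initial values are drawn at random, repeated calls give different results; a usage note (not part of the original example): setting a seed makes the call reproducible.

set.seed(123)
str(initnw(neurons = neurons, p = p, n = n, npar = npar))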