DChaos (version 0.1-1)

jacobi: Application of the Jacobian method through a neural-network fit

Description

This function estimates the partial derivatives of the Jacobian by fitting a single-hidden-layer neural network, considering the argument set selected by the user.
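For intuition on how a neural-network fit yields Jacobian estimates: once a single-hidden-layer network has been fitted to the delayed vectors, its partial derivatives follow analytically from the chain rule. The sketch below illustrates this idea using the nnet package; it is a minimal illustration under those assumptions, not the DChaos implementation, whose internals and weight layout may differ.

library(nnet)

set.seed(1)
x <- numeric(200); x[1] <- 0.4
for (t in 2:200) x[t] <- 4 * x[t - 1] * (1 - x[t - 1])   # logistic-map series

m <- 3; h <- 5
X <- embed(x, m + 1)                       # columns: x_t, x_{t-1}, ..., x_{t-m}
fit <- nnet(X[, -1], X[, 1], size = h, linout = TRUE, trace = FALSE)

## Split nnet's weight vector: input->hidden block, then hidden->output block
W <- fit$wts
gamma <- matrix(W[seq_len(h * (m + 1))], nrow = m + 1)   # column i: (bias, input weights) of hidden unit i
beta  <- W[-seq_len(h * (m + 1))][-1]                    # hidden->output weights (output bias dropped)

sigm <- function(z) 1 / (1 + exp(-z))                    # logistic activation
## One row of the Jacobian, evaluated at a delayed vector v = (x_{t-1}, ..., x_{t-m})
jac.row <- function(v) {
  a <- gamma[1, ] + drop(v %*% gamma[-1, ])              # hidden-unit activations
  drop(gamma[-1, ] %*% (beta * sigm(a) * (1 - sigm(a)))) # chain rule
}
jac.row(X[1, -1])    # partial derivatives at the first delayed vector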

Usage

jacobi(x, lag = 1, timelapse = "FIXED", M0 = 3, M1 = 10, H0 = 2,
  H1 = 10, I = 100, pre.white = TRUE, doplot = TRUE)

Arguments

x

a numeric vector, time series, data frame or matrix, depending on the method selected in timelapse.

lag

a non-negative integer denoting the reconstruction delay (Default 1).

timelapse

a character denoting whether the observations are sampled at uniform time intervals, FIXED, or with a variable time lapse between observations, VARIABLE (Default FIXED).

M0

a non-negative integer denoting a lower bound for the embedding dimension (Default 3).

M1

a non-negative integer denoting an upper bound for the embedding dimension (Default 10).

H0

a non-negative integer denoting a lower bound for the number of neurons in the hidden layer (Default 2).

H1

a non-negative integer denoting an upper bound for the number of neurons in the hidden layer (Default 10).

I

a non-negative integer denoting the number of neural-network iterations (Default 100).

pre.white

a logical value denoting whether the partial derivatives are evaluated at the delayed vectors filtered by the neural network, TRUE, or at the original delayed vectors, FALSE (Default TRUE).

doplot

a logical value denoting whether to draw a plot, TRUE, or not, FALSE. If TRUE, as many graphs as networks considered are shown. Each graph represents the network structure, drawing weights with positive values in black and weights with negative values in grey; the thickness of each line reflects the magnitude of the corresponding weight (Default TRUE).

Value

A list with several objects. The first output is a matrix called Network.set. It contains the network with the best fit for each embedding dimension m, that is, the neural network with the minimum Bayesian information criterion (BIC) among all possible numbers of neurons in the hidden layer. The partial derivatives of the Jacobian are then saved in a data frame for each neural network structure considered, kept as Jacobian.net followed by the index of the structure (e.g., Jacobian.net2).
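For instance, assuming jacob is the object returned by the call shown in the Examples section below, the components could be inspected as follows (the Jacobian.net2 name is taken from that example):

names(jacob)               # Network.set plus one Jacobian.net object per structure kept
jacob$Network.set          # best network (minimum BIC) for each embedding dimension
head(jacob$Jacobian.net2)  # partial derivatives for the second structure (m = 4)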

Details

If FIXED has been selected, x must be a numeric vector or time series. Otherwise, VARIABLE has to be specified. In this case x must be a data frame or matrix with two columns. First, the date, with the format YMD H:M:OS3 considering milliseconds (e.g., 20190407 00:00:03.347); if you do not consider milliseconds you must put .000 after the seconds. It should be an object of class Factor. Second, the univariate time series as a sequence of numerical values.
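As a minimal sketch of that layout (column names and values here are illustrative assumptions):

dates <- factor(c("20190407 00:00:03.347",
                  "20190407 00:00:05.120",
                  "20190407 00:00:08.000"))   # YMD H:M:OS3 format, class Factor
serie <- c(0.731, 0.786, 0.672)               # univariate time series values
x <- data.frame(date = dates, serie = serie)
## jacobi(x, timelapse = "VARIABLE", ...)     # call shape for variable sampling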

References

Eckmann, J.P., Ruelle, D. 1985 Ergodic theory of chaos and strange attractors. Reviews of Modern Physics 57:617-656.

Eckmann, J.P., Kamphorst, S.O., Ruelle, D., Ciliberto, S. 1986 Liapunov exponents from time series. Physical Review A 34:971-979.

Hornik, K., Stinchcombe, M., White, H. 1989 Multilayer feedforward networks are universal approximators. Neural Networks 2(5):359-366.

Gencay, R., Dechert, W. 1992 An algorithm for the n Lyapunov exponents of an n-dimensional unknown dynamical system. Physica D 59(1):142-157.

McCaffrey, D.F., Ellner, S., Gallant, A.R., Nychka, D.W. 1992 Estimating the Lyapunov exponent of a chaotic system with nonparametric regression. Journal of the American Statistical Association 87(419):682-695.

Nychka, D., Ellner, S., Gallant, A.R., McCaffrey, D. 1992 Finding chaos in noisy systems. Journal of the Royal Statistical Society, Series B 54(2):399-426.

Kuan, C., Liu, T., Gencay, R. 2004 Netfile 4.01: Feedforward neural networks and Lyapunov exponents estimation. Ball State University.

See Also

embedding

Examples

## We show below an example considering time series from the
## logistic equation. The first object is a matrix called
## Network.set. It contains the networks that have the best
## fit for each embedding dimension (3 <= m <= 4).
data <- logistic.ts(u.min = 4, u.max = 4, B = 100, doplot = FALSE)
ts <- data$`Logistic 100`$time.serie
jacob <- jacobi(ts, lag = 1, timelapse = "FIXED", M0 = 3, M1 = 4,
                H0 = 3, H1 = 7, I = 10, pre.white = TRUE, doplot = FALSE)
show(jacob$Network.set)
## The partial derivatives of the Jacobian are saved in a
## data frame for each neural network structure considered,
## kept as Jacobian.net. The first ten Jacobian values
## corresponding to the neural network for m = 4 are shown.
show(head(jacob$Jacobian.net2, 10))
