VGAM (version 1.0-4)

AR1EIM: Computation of the Exact EIM of an Order-1 Autoregressive Process

Description

Computation of the exact Expected Information Matrix of the Autoregressive process of order-\(1\) (AR(\(1\))) with Gaussian white noise and stationary random components.

Usage

AR1EIM(x = NULL, var.arg = NULL, p.drift = NULL,
       WNsd = NULL, ARcoeff1 = NULL, eps.porat = 1e-2)

Arguments

x

A vector of quantiles: the Gaussian time series for which the EIMs are computed.

If multiple time series are being analyzed, then x must be a matrix where each column corresponds to one response. That is, the number of columns (denoted as \(NOS\)) must match the number of responses.

var.arg

Logical. Same as with AR1.

p.drift

A numeric vector with the scaled mean(s) (commonly referred to as the drift) of the AR process(es) in turn. Its length matches the number of responses.

WNsd, ARcoeff1

Matrices. The standard deviation of the white noise, and the correlation (coefficient) of the AR(\(1\)) model, for each observation.

That is, each matrix has dimension \(N \times NOS\), where \(N\) is the number of observations and \(NOS\) is the number of responses. Otherwise, these arguments are recycled (see the sketch at the end of this section).

eps.porat

A very small positive number used to test whether the standard deviation (WNsd) is close enough to the value estimated internally by this function.

See below for further details.
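For example, here is a minimal sketch (the object names and values below are purely illustrative, not part of AR1EIM) of per-response values expanded into matrices of the dimension that WNsd and ARcoeff1 expect:

  N   <- 100     # number of observations (assumed for illustration)
  NOS <- 2       # number of responses (assumed for illustration)
  WNsd.mat     <- matrix(c(1.0, 1.5),  nrow = N, ncol = NOS, byrow = TRUE)
  ARcoeff1.mat <- matrix(c(0.3, 0.25), nrow = N, ncol = NOS, byrow = TRUE)
  dim(WNsd.mat)     # N x NOS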

Value

An array of dimension \(N \times NOS \times 6\), as described below in Details.

This array stores the EIMs calculated from the joint density as a function of $$\boldsymbol{\theta} = (\mu^*, \sigma^2, \rho). $$

Nevertheless, note that, under the VGAM/VGLM approach, the EIMs must be correspondingly calculated in terms of the linear predictors, \(\boldsymbol{\eta}\).
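For reference, this is the standard change-of-variable step for VGLMs (it is not spelled out on this page): $$ J_n(\boldsymbol{\eta}) = \mathbf{D}^{\top} \, J_n(\boldsymbol{\theta}) \, \mathbf{D}, ~~~~~~ \mathbf{D} = \partial \boldsymbol{\theta} / \partial \boldsymbol{\eta}, $$ i.e., the EIM is pre- and post-multiplied by the Jacobian of the inverse link transformations.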

Asymptotic behaviour of the algorithm

For large enough \(n\), the EIMs, \(J_n(\boldsymbol{\theta})\), become approximately linear in \(n\). That is, for some \(n_0\),

$$ J_n(\boldsymbol{\theta}) \equiv J_{n_0}(\boldsymbol{\theta}) + (n - n_0) \bar{J}(\boldsymbol{\theta}),~~~~~~(**) $$ where \( \bar{J}(\boldsymbol{\theta}) \) is a constant matrix.

This relationship is exploited internally once a suitable value of \(n_0\) has been determined. Different ways can be adopted to find \(n_0\). In AR1EIM, this is done by checking the difference between the internally estimated variances and those entered at WNsd. If this difference is less than eps.porat at some iteration, say at iteration \(n_0\), then AR1EIM takes \(\bar{J}(\boldsymbol{\theta})\) as the last computed increment of \(J_n(\boldsymbol{\theta})\) and extrapolates \(J_k(\boldsymbol{\theta})\), for all \(k \geq n_0\), using \((**)\). Otherwise, the algorithm completes the iterations for \(1 \leq n \leq N\).

Finally, note that the computational burden decreases considerably when the asymptotic relationship \((**)\) is used to compute \(J_k(\boldsymbol{\theta})\), \(k \geq n_0\). Otherwise, the number of operations involved in this algorithm is proportional to \(N^2\).
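For illustration, a minimal sketch of the extrapolation step \((**)\), assuming the \(3 \times 3\) matrices J.n0 (the EIM at iteration \(n_0\)) and J.bar (the constant increment) have already been computed; all names and values below are placeholders:

  n0    <- 50
  J.n0  <- diag(3)          # placeholder for J_{n0}(theta)
  J.bar <- 0.1 * diag(3)    # placeholder for the constant increment Jbar(theta)
  J.k   <- function(k) J.n0 + (k - n0) * J.bar   # J_k(theta), for k >= n0
  J.k(75)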

See Porat and Friedlander (1986) for full details on the asymptotic behaviour of the algorithm.

Warning

Arguments WNsd and ARcoeff1 are matrices of dimension \(N \times NOS\). Otherwise, these arguments are recycled accordingly.

Details

This function implements the algorithm of Porat and Friedlander (1986) to recursively compute the exact expected information matrix (EIM) of Gaussian time series with stationary random components.

By default, when the VGLM/VGAM family function AR1 is used to fit an AR(\(1\)) model via vglm, Fisher scoring is executed using the approximate EIM for the AR process. However, this model can also be fitted using the exact EIMs computed by AR1EIM.
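For instance, a brief sketch of such a fit on simulated data; the type.EIM argument is an assumption here, taken from the AR1 help page, and the data are purely illustrative:

  ## Sketch only: Fisher scoring with the exact EIMs, via AR1()'s
  ## 'type.EIM' argument (see the AR1 help file); simulated data.
  library("VGAM")
  set.seed(2)
  tsdata <- data.frame(y = as.numeric(arima.sim(list(ar = 0.5), n = 200, sd = 1)))
  fit <- vglm(y ~ 1, AR1(type.EIM = "exact"), data = tsdata, trace = TRUE)
  coef(fit, matrix = TRUE)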

Given \(N\) consecutive data points, \( \{y_{0}, y_{1}, \ldots, y_{N - 1}\} \), with probability density \(f(\boldsymbol{y})\), the Porat and Friedlander algorithm recursively calculates the EIMs \(J_{n}(\boldsymbol{\theta})\), for all \(1 \leq n \leq N\). This is based on the Levinson-Durbin algorithm for computing the orthogonal polynomials of a Toeplitz matrix. In particular, for the AR(\(1\)) model, the vector of parameters to be estimated under the VGAM/VGLM approach is

$$ \boldsymbol{\eta} = (\mu^{*}, loge(\sigma^2), rhobit(\rho)),$$ where \(\sigma^2\) is the variance of the white noise and \(\mu^{*}\) is the drift parameter (see AR1 for further details on this).
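In base R terms (for illustration only), these link transformations amount to:

  ## theta = (mu*, sigma^2, rho) mapped to eta, using base R only.
  mu.star <- 0.5;  sigma2 <- 2;  rho <- 0.3        # illustrative values
  eta <- c(mu.star,                                # identity link for the drift
           log(sigma2),                            # loge link
           log((1 + rho) / (1 - rho)))             # rhobit link
  eta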

Consequently, for each observation \(n = 1, \ldots, N\), the EIM, \(J_{n}(\boldsymbol{\theta})\), has dimension \(3 \times 3\), where the diagonal elements are: $$ J_{[n, 1, 1]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial ( \mu^{*} )^2 ], $$

$$ J_{[n, 2, 2]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial (\sigma^2)^2 ], $$

and

$$ J_{[n, 3, 3]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial ( \rho )^2 ]. $$

As for the off-diagonal elements, one has the usual entries, i.e., $$ J_{[n, 2, 3]} = J_{[n, 3, 2]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial \sigma^2 \partial \rho], $$ etc.

If var.arg = FALSE, then \(\sigma\) instead of \(\sigma^2\) is estimated. Therefore, \(J_{[n, 2, 2]}\), \(J_{[n, 1, 2]}\), etc., are correspondingly replaced.

Once these expected values are internally computed, they are returned in an array of dimension \(N \times 1 \times 6\), of the form

$$J[, 1, ] = [ J_{[ , 1, 1]}, J_{[ , 2, 2]}, J_{[ , 3, 3]}, J_{[ , 1, 2]}, J_{[, 2, 3]}, J_{[ , 1, 3]} ]. $$

AR1EIM handles multiple time series, say \(NOS\). In that case, it returns an array of dimension \(N \times NOS \times 6\). Here, \(J[, k, ]\), for \(k = 1, \ldots, NOS\), is a matrix of dimension \(N \times 6\) which stores the EIMs for the \(k^{th}\) response, as above, i.e.,

$$J[, k, ] = [ J_{[ , 1, 1]}, J_{[ , 2, 2]}, J_{[ , 3, 3]}, \ldots ], $$

that is, the matrix-band format required by AR1.
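As an illustration (the function name below is chosen here for exposition and is not part of VGAM), the six stored columns for response \(k\) at observation \(n\) can be rearranged back into the symmetric \(3 \times 3\) EIM:

  ## Columns are stored in the order (1,1), (2,2), (3,3), (1,2), (2,3), (1,3).
  EIM3x3 <- function(J, n, k) {
    jj <- J[n, k, ]
    matrix(c(jj[1], jj[4], jj[6],
             jj[4], jj[2], jj[5],
             jj[6], jj[5], jj[3]), nrow = 3, byrow = TRUE)
  }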

References

Porat, B. and Friedlander, B. (1986) Computation of the Exact Information Matrix of Gaussian Time Series with Stationary Random Components. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34(1), 118--130.

See Also

AR1.

Examples

  set.seed(1)
  nn <- 500
  ARcoeff1 <- c(0.3, 0.25)        # Will be recycled.
  WNsd     <- c(exp(1), exp(1.5)) # Will be recycled.
  p.drift  <- c(0, 0)             # Zero-mean Gaussian time series.

  ### Generate two (zero-mean) AR(1) processes ###
  ts1 <- p.drift[1] / (1 - ARcoeff1[1]) +
         arima.sim(model = list(ar = ARcoeff1[1]), n = nn, sd = WNsd[1])
  ts2 <- p.drift[2] / (1 - ARcoeff1[2]) +
         arima.sim(model = list(ar = ARcoeff1[2]), n = nn, sd = WNsd[2])

  ARdata <- matrix(cbind(ts1, ts2), ncol = 2)


  ### Compute the exact EIMs: TWO responses. ###
  ExactEIM <- AR1EIM(x = ARdata, var.arg = FALSE, p.drift = p.drift,
                           WNsd = WNsd, ARcoeff1 = ARcoeff1)

  ### For response 1:
  head(ExactEIM[, 1 ,])      # NOTICE THAT THIS IS A (nn x 6) MATRIX!

  ### For response 2:
  head(ExactEIM[, 2 ,])      # NOTICE THAT THIS IS A (nn x 6) MATRIX!
