Usage

ar(x, aic = TRUE, order.max = NULL,
   method = c("yule-walker", "burg", "ols", "mle", "yw"),
   na.action, series, ...)
ar.burg(x, ...)
"ar.burg"(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, var.method = 1, ...)
"ar.burg"(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, var.method = 1, ...)
ar.yw(x, ...)
"ar.yw"(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, ...)
"ar.yw"(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, var.method = 1, ...)
ar.mle(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, ...)
"predict"(object, newdata, n.ahead = 1, se.fit = TRUE, ...)TRUE then the Akaike Information
    Criterion is used to choose the order of the autoregressive
    model. If FALSE, the model of order order.max is
    fitted.method = "mle" where it is the minimum of this
    quantity and 12."yule-walker".deparse(substitute(x)).ar.ar and its methods a list of class "ar" with
  the following elements:
order: The order of the fitted model. This is chosen by
    minimizing the AIC if aic = TRUE, otherwise it is order.max.

ar: Estimated autoregression coefficients for the fitted model.

var.pred: The prediction variance: an estimate of the portion of
    the variance of the time series that is not explained by the
    autoregressive model.

x.mean: The estimated mean of the series used in fitting and for
    use in prediction.

x.intercept: (ar.ols only.) The intercept in the model for
    x - x.mean.

aic: The differences in AIC between each model and the
    best-fitting model. Note that the latter has an AIC of zero.

order.max: The value of the order.max argument.

partialacf: The estimate of the partial autocorrelation function
    up to lag order.max.

resid: Residuals from the fitted model, conditioning on the first
    order observations. The first order residuals are set to NA.
    If x is a time series, so is resid.

method: The value of the method argument.

series: The name(s) of the time series.

frequency: The frequency of the time series.

call: The matched call.

asy.var.coef: (univariate case, order > 0.)
    The asymptotic-theory variance matrix of the coefficient
    estimates.

For predict.ar, a time series of predictions, or if
se.fit = TRUE, a list with components pred, the predictions, and
se, the estimated standard errors. Both components are time
series.
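For illustration, a minimal sketch (using the lh series from the
datasets package, as in the examples below) of inspecting these
components:

fit <- ar(lh)        # Yule-Walker fit, order chosen by AIC
fit$order            # selected order
fit$ar               # autoregression coefficients
fit$var.pred         # estimated innovations variance
p <- predict(fit, n.ahead = 12)
p$pred               # point predictions (a time series)
p$se                 # estimated standard errors (also a time series)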
Details

For definiteness, note that the AR coefficients have the sign in

$$x_t - \mu = a_1(x_{t-1} - \mu) + \cdots + a_p(x_{t-p} - \mu) + e_t$$
  ar is just a wrapper for the functions ar.yw,
  ar.burg, ar.ols and ar.mle.
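The sign convention can be checked with a small sketch (the
coefficients and mean below are arbitrary choices): simulate from a
known AR(2) process with arima.sim and refit it.

set.seed(1)
x <- arima.sim(model = list(ar = c(0.6, -0.3)), n = 500) + 10  # mu = 10
fit <- ar(x, aic = FALSE, order.max = 2)
fit$ar      # should be close to c(0.6, -0.3), matching the signs above
fit$x.mean  # should be close to 10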
  Order selection is done by AIC if aic is true. This is
  problematic, as of the methods here only ar.mle performs
  true maximum likelihood estimation. The AIC is computed as if the variance
  estimate were the MLE, omitting the determinant term from the
  likelihood. Note that this is not the same as the Gaussian likelihood
  evaluated at the estimated parameter values. In ar.yw the
  variance matrix of the innovations is computed from the fitted
  coefficients and the autocovariance of x.
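The AIC differences used in order selection are returned in the aic
component of the fit, e.g. (a small sketch, again using lh):

fit <- ar(lh, order.max = 6)
fit$aic  # AIC relative to the best-fitting model, which is shown as 0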
  ar.burg allows two methods to estimate the innovations
  variance and hence AIC. Method 1 is to use the update given by
  the Levinson-Durbin recursion (Brockwell and Davis, 1991, (8.2.6)
  on page 242), and follows S-PLUS. Method 2 is the mean of the sum
  of squares of the forward and backward prediction errors
  (as in Brockwell and Davis, 1996, page 145). Percival and Walden
  (1998) discuss both. In the multivariate case the estimated
  coefficients will depend (slightly) on the variance estimation method.
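For instance (an illustrative sketch; the values depend on the
series), the two variance estimates can be compared directly:

ar.burg(lh, var.method = 1)$var.pred  # Levinson-Durbin update
ar.burg(lh, var.method = 2)$var.pred  # mean of squared forward/backward errors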
  Remember that ar includes by default a constant in the model, by
  removing the overall mean of x before fitting the AR model,
  or (ar.mle) estimating a constant to subtract.
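To suppress this, pass demean = FALSE to one of the individual
fitting functions, e.g. (a sketch assuming the series has already
been centred):

ar.yw(lh - mean(lh), demean = FALSE)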
References

Brockwell, P. J. and Davis, R. A. (1991) Time Series: Theory and Methods. Second edition. Springer, New York.

Brockwell, P. J. and Davis, R. A. (1996) Introduction to Time Series and Forecasting. Springer, New York. Sections 5.1 and 7.6.

Percival, D. P. and Walden, A. T. (1998) Spectral Analysis for Physical Applications. Cambridge University Press.

Whittle, P. (1963) On the fitting of multivariate autoregressions and the approximate canonical factorization of a spectral density matrix. Biometrika 50, 129--134.
See Also

ar.ols; arima for ARMA models; acf2AR for AR construction from the
ACF; arima.sim for simulation of AR processes.
Examples

ar(lh)
ar(lh, method = "burg")
ar(lh, method = "ols")
ar(lh, FALSE, 4) # fit ar(4)
(sunspot.ar <- ar(sunspot.year))
predict(sunspot.ar, n.ahead = 25)
## try the other methods too
ar(ts.union(BJsales, BJsales.lead))
## Burg is quite different here, as is OLS (see ar.ols)
ar(ts.union(BJsales, BJsales.lead), method = "burg")