
bvhar (version 2.2.2)

var_lm: Fitting Vector Autoregressive Model of Order p

Description

This function fits a VAR(p) model using the ordinary least squares (OLS) method.

Usage

var_lm(y, p = 1, include_mean = TRUE, method = c("nor", "chol", "qr"))

# S3 method for varlse
print(x, digits = max(3L, getOption("digits") - 3L), ...)

# S3 method for varlse
logLik(object, ...)

# S3 method for varlse
AIC(object, ...)

# S3 method for varlse
BIC(object, ...)

is.varlse(x)

is.bvharmod(x)

# S3 method for varlse
knit_print(x, ...)

Value

var_lm() returns an object of class varlse, a list with the following components:

coefficients: Coefficient matrix
fitted.values: Fitted response values
residuals: Residuals
covmat: LS estimate for the covariance matrix
df: Number of coefficients
p: Lag of VAR
m: Dimension of the data
obs: Sample size used for fitting, totobs - p
totobs: Total number of observations
call: Matched call
process: Process string (VAR)
type: Include constant term (const) or not (none)
design: Design matrix
y: Raw input
y0: Multivariate response matrix
method: Solving method

It also inherits from the bvharmod class.
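As a quick illustration, the class helpers from the usage section and the components listed above can be inspected directly (assuming bvhar is loaded):

library(bvhar)
fit <- var_lm(etf_vix, p = 2)
is.varlse(fit)           # TRUE
is.bvharmod(fit)         # TRUE
fit$p                    # lag order used
fit$obs                  # sample size, totobs - p
dim(fit$coefficients)    # coefficient matrix dimensions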

Arguments

y: Time series data whose columns indicate the variables
p: Lag of VAR (default: 1)
include_mean: Add constant term (TRUE, the default) or not (FALSE)
method: Method to solve the linear equation system: nor (normal equation, default), chol (Cholesky), or qr (Householder QR); see the sketch after this list
x: Any object
digits: Number of digits to print
...: Not used
object: A varlse object
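All three choices of method solve the same least squares problem with different numerical routines, so the resulting estimates should agree up to floating-point tolerance. A minimal sketch, assuming bvhar is loaded and using the etf_vix dataset from the examples below:

fit_nor <- var_lm(etf_vix, p = 2, method = "nor")
fit_chol <- var_lm(etf_vix, p = 2, method = "chol")
fit_qr <- var_lm(etf_vix, p = 2, method = "qr")

# Coefficient estimates should match across solvers
all.equal(coef(fit_nor), coef(fit_chol))
all.equal(coef(fit_nor), coef(fit_qr))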

Details

This package specifies the VAR(p) model as

$$Y_{t} = A_1 Y_{t - 1} + \cdots + A_p Y_{t - p} + c + \epsilon_t$$

If include_mean = TRUE, the model includes the constant term \(c\). The function estimates every coefficient matrix.
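To make the model equation concrete, here is a minimal base-R sketch (with hypothetical coefficient values) that simulates a bivariate VAR(1) and recovers the coefficients with var_lm():

set.seed(1)
A1 <- matrix(c(0.5, 0.1, 0.2, 0.4), 2, 2)  # hypothetical A_1
c_vec <- c(0.1, -0.1)                      # hypothetical constant c
y_sim <- matrix(0, 200, 2)
for (t in 2:200) {
  # Y_t = A_1 Y_{t-1} + c + epsilon_t
  y_sim[t, ] <- A1 %*% y_sim[t - 1, ] + c_vec + rnorm(2, sd = 0.1)
}
fit_sim <- var_lm(y_sim, p = 1)
coef(fit_sim)  # should be close to A1 and c_vec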

Consider the response matrix \(Y_0\). Let \(T\) be the total sample size, \(m\) the dimension of the time series, \(p\) the order of the model, and \(n = T - p\). The likelihood of VAR(p) is given by

$$Y_0 \mid B, \Sigma_e \sim MN(X_0 B, I_n, \Sigma_e)$$

where \(X_0\) is the design matrix and MN denotes the matrix normal distribution.
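One common way to build \(Y_0\) and \(X_0\) stacks the \(p\) lags of each observation, plus a column of ones when a constant term is included. The sketch below is one such construction; bvhar's internal column ordering may differ, and the fitted object's design component holds the matrix actually used.

build_var_design <- function(y, p, include_mean = TRUE) {
  y <- as.matrix(y)
  T_tot <- nrow(y)
  # rows correspond to t = p + 1, ..., T; columns stack lags 1, ..., p
  X0 <- do.call(cbind, lapply(seq_len(p), function(j) {
    y[(p - j + 1):(T_tot - j), , drop = FALSE]
  }))
  if (include_mean) X0 <- cbind(X0, 1)  # constant term column
  list(X0 = X0, Y0 = y[(p + 1):T_tot, , drop = FALSE])
}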

Then the log-likelihood of the vector autoregressive model family is specified by

$$\log p(Y_0 \mid B, \Sigma_e) = - \frac{nm}{2} \log 2\pi - \frac{n}{2} \log \det \Sigma_e - \frac{1}{2} tr( (Y_0 - X_0 B) \Sigma_e^{-1} (Y_0 - X_0 B)^T )$$

In addition, recall that the OLS estimator for the coefficient matrix coincides with the MLE under the Gaussian assumption, while the MLE for \(\Sigma_e\) has a different denominator (\(n\) rather than \(n - k\), where \(k\) is the number of coefficients per equation):

$$\hat{B} = \hat{B}^{LS} = \hat{B}^{ML} = (X_0^T X_0)^{-1} X_0^T Y_0$$

$$\hat\Sigma_e = \frac{1}{n - k} (Y_0 - X_0 \hat{B})^T (Y_0 - X_0 \hat{B})$$

$$\tilde\Sigma_e = \frac{1}{n} (Y_0 - X_0 \hat{B})^T (Y_0 - X_0 \hat{B}) = \frac{n - k}{n} \hat\Sigma_e$$
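These closed forms are a few lines of linear algebra. A hedged sketch, reusing build_var_design() from above (and assuming bvhar is loaded for etf_vix):

dm <- build_var_design(etf_vix, p = 2)
n <- nrow(dm$X0)
k <- ncol(dm$X0)
m <- ncol(dm$Y0)

B_hat <- solve(crossprod(dm$X0), crossprod(dm$X0, dm$Y0))  # (X0'X0)^{-1} X0'Y0
E <- dm$Y0 - dm$X0 %*% B_hat                               # residual matrix
sig_hat <- crossprod(E) / (n - k)   # unbiased estimator (covmat)
sig_tilde <- crossprod(E) / n       # MLE

# Gaussian log-likelihood evaluated at the estimates
ll <- -(n * m / 2) * log(2 * pi) - (n / 2) * log(det(sig_tilde)) -
  sum(diag(E %*% solve(sig_tilde) %*% t(E))) / 2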

Let \(\tilde{\Sigma}_e\) be the MLE and let \(\hat{\Sigma}_e\) be the unbiased estimator (covmat) for \(\Sigma_e\). Note that

$$\tilde{\Sigma}_e = \frac{n - k}{n} \hat{\Sigma}_e$$

Then

$$AIC(p) = \log \det \tilde{\Sigma}_e + \frac{2}{n}(\text{number of freely estimated parameters})$$

where the number of freely estimated parameters is \(mk\), i.e. \(pm^2\) without a constant term or \(pm^2 + m\) with one.

Similarly,

$$BIC(p) = \log \det \tilde{\Sigma}_e + \frac{\log n}{n}(\text{number of freely estimated parameters})$$

where the number of freely estimated parameters is again \(mk\), i.e. \(pm^2\) or \(pm^2 + m\).
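As a hedged check, both criteria can be computed by hand from the documented varlse components (obs, df, m, covmat), assuming bvhar is loaded; the values returned by the AIC() and BIC() methods may differ from these by a constant or scale factor depending on the implementation.

fit <- var_lm(etf_vix, p = 2)
n <- fit$obs
k <- fit$df
m <- fit$m

sig_tilde <- ((n - k) / n) * fit$covmat  # MLE from the unbiased covmat
aic_manual <- log(det(sig_tilde)) + (2 / n) * (m * k)
bic_manual <- log(det(sig_tilde)) + (log(n) / n) * (m * k)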

References

Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer.

Akaike, H. (1969). Fitting autoregressive models for prediction. Ann Inst Stat Math, 21, 243-247.

Akaike, H. (1971). Autoregressive model fitting for control. Ann Inst Stat Math, 23, 163-180.

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716-723.

Akaike, H. (1998). Information Theory and an Extension of the Maximum Likelihood Principle. In: Parzen, E., Tanabe, K., Kitagawa, G. (eds), Selected Papers of Hirotugu Akaike. Springer Series in Statistics (Perspectives in Statistics). Springer, New York, NY.

Schwarz, G. (1978). Estimating the Dimension of a Model. Ann. Statist., 6(2), 461-464.

See Also

  • summary.varlse() to summarize the VAR model

Examples

library(bvhar)

# Fit a VAR(2) model using the etf_vix dataset
fit <- var_lm(y = etf_vix, p = 2)
class(fit)
str(fit)

# Extract coef, fitted values, and residuals
coef(fit)
head(residuals(fit))
head(fitted(fit))
