
MSBVAR (version 0.4.0)

gibbs.msbvar: Gibbs sampler for a Markov-switching Bayesian reduced form vector autoregression model

Description

Draws a Bayesian posterior sample for a Markov-switching Bayesian reduced form vector autoregression model based on the setup from the msbvar function.

Usage

gibbs.msbvar(x, N1 = 1000, N2 = 1000, permute = TRUE,
             upper.idx = NULL, lower.idx = NULL)

Arguments

x: MSBVAR setup and posterior mode estimate produced by the msbvar function.

N1: Number of burn-in iterations for the Gibbs sampler (default 1000).

N2: Number of draws kept in the posterior sample (default 1000).

permute: Logical; if TRUE (the default), randomly permute the regime labels at each draw of the Gibbs sampler. See Details.

upper.idx: Upper index of the coefficient used to identify the regimes when permute = FALSE.

lower.idx: Lower index of the coefficient used to identify the regimes when permute = FALSE.

Value

A list summarizing the reduced form MSBVAR posterior:

Beta.sample: $N2 \times h(m^2 p + m)$ matrix of the BVAR regression coefficients for each regime. The ordering is by regime, then equation, then intercept (and, in the future, covariates): the first $mp$ coefficients are those of the first equation in the first regime, ordered by lag, not variable, and the next is that equation's intercept. This pattern repeats for the remaining coefficients across the regimes.

Sigma.sample: $N2 \times h\frac{m(m+1)}{2}$ matrix of the covariance parameters of the regime error covariances $\Sigma_h$. Since these matrices are symmetric positive definite, only the upper (or lower) triangle is stored; the elements are the first, second, etc. columns / rows of the upper / lower version of the matrix.

Q.sample: $N2 \times h^2$ matrix of the draws of the Markov transition matrix $Q$.

ss.sample: List of class SS holding the N2 draws of the state-space matrices, coded as bit objects for compression / efficiency.
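
As a hypothetical sketch of the Beta.sample layout (the dimensions m, p, h and the result object posterior are assumptions for illustration, not objects defined on this page), the coefficient draws can be unpacked regime by regime:

# Sketch under assumed dimensions: m variables, p lags, h regimes,
# with 'posterior' holding a gibbs.msbvar result.
m <- 2; p <- 2; h <- 2
k <- m * p + 1                          # coefficients per equation: m*p lags + intercept
stopifnot(ncol(posterior$Beta.sample) == h * m * k)   # h(m^2 p + m) columns in total
beta.bar <- colMeans(posterior$Beta.sample)           # posterior means of the coefficients
# One k x m matrix per regime: column i holds equation i's lag
# coefficients, with that equation's intercept in the last row.
B <- lapply(1:h, function(j)
  matrix(beta.bar[((j - 1) * m * k + 1):(j * m * k)], nrow = k, ncol = m))
B[[1]][k, ]                             # regime 1 intercepts, one per equation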

Details

This function implements a Gibbs sampler for the posterior of an MSBVAR model set up with msbvar. This is a reduced form MSBVAR model. The estimation mixes native R code and compiled C++: the BVAR coefficients, the transition matrix, and the error covariances for each regime are sampled in native R, while the forward-filtering-backward-sampling of the Markov-switching process (the most computationally intensive part of the estimation) is handled in compiled C++. As such, the model is reasonably fast for small samples and small numbers of regimes (say, fewer than 2000 observations and 2-4 regimes). The reason for this mixed implementation is that it makes it easier to set up variants of the model (some coefficients switching, others not; different sampling methods; etc.).

The random permutation of the states is done using a multinomial step: at each draw of the Gibbs sampler, the states are permuted using a multinomial draw. This generates a posterior sample in which the states are unidentified. This makes sense, since the user may have little idea of how to select among the $h!$ posterior models of the reduced form MSBVAR model (see e.g., Fruhwirth-Schnatter (2006)). Once a posterior sample has been drawn with random permutation, a clustering algorithm can be used to identify the states, for example, by examining the intercepts or covariances across the regimes (see the sketch below for details).
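
A minimal sketch of such a clustering step, under the same assumed dimensions and 'posterior' object as above: stack the intercept draws from every regime slot and let kmeans recover h clusters, one per identified regime.

# Sketch (assumptions as above): cluster intercept draws to identify regimes.
m <- 2; p <- 2; h <- 2
k <- m * p + 1
int.cols <- function(j) (j - 1) * m * k + (1:m) * k   # intercept columns of regime j
ints <- do.call(rbind, lapply(1:h, function(j) posterior$Beta.sample[, int.cols(j)]))
cl <- kmeans(ints, centers = h)         # h clusters ~ h identified regimes
table(cl$cluster)                       # roughly equal sizes reflect the label permutation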

The Gibbs sampler cycles through the following steps: drawing the state-space of the Markov-switching process by forward-filtering-backward-sampling; drawing the transition matrix conditional on the sampled states; drawing the error covariances for each regime; drawing the BVAR regression coefficients for each regime; and, when permute = TRUE, randomly permuting the regime labels. The state-space for the MS process is a $T \times h$ matrix of zeros and ones. Since this matrix classifies the observations into states for the N2 posterior draws, it does not make sense to store it in double precision. We use the bit package to compress this matrix into a 2-bit integer representation for more efficient storage. Functions are provided (see below) for summarizing and plotting the resulting state-space of the MS process.
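
As a hedged sketch of that summary step (again assuming 'posterior' holds the gibbs.msbvar output), the S3 methods listed under "See Also" should dispatch on the SS class of ss.sample:

plot(posterior$ss.sample)               # dispatches to plot.SS: regime probabilities over time
probs <- mean(posterior$ss.sample)      # dispatches to mean.SS: T x h mean regime occupancies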

Identifying the regimes therefore requires estimating the model twice. Because random permutation deliberately leaves the regime labels unidentified, a first run with permute = TRUE is used to explore all $h!$ labelings and to choose an identification (for example, by clustering the intercepts or covariances as described above). A second run with permute = FALSE, together with upper.idx and lower.idx, then produces an identified posterior sample with the chosen regime ordering.
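
A sketch of this two-stage workflow follows. The data Y and all hyperparameter values are placeholders; the hyperparameter names follow the Sims-Zha prior conventions used elsewhere in MSBVAR, so consult msbvar for the exact signature.

# Stage 0: set up the model and find its posterior mode (placeholder priors).
setup <- msbvar(Y, p = 2, h = 2, lambda0 = 0.8, lambda1 = 0.15,
                lambda3 = 1, lambda4 = 0.25, lambda5 = 1,
                mu5 = 0, mu6 = 0, qm = 12, prior = 0)
# Stage 1: unidentified, randomly permuted sample to explore the h! labelings.
perm <- gibbs.msbvar(setup, N1 = 1000, N2 = 5000, permute = TRUE)
# ... choose an identification, e.g. by clustering as sketched above ...
# Stage 2: identified sample; upper.idx / lower.idx pick the coefficient
# used to order the regimes (see Arguments).
ident <- gibbs.msbvar(setup, N1 = 1000, N2 = 5000,
                      permute = FALSE, upper.idx = 1, lower.idx = 2)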

References

Brandt, Patrick T. 2009. "Empirical, Regime-Specific Models of International, Inter-group Conflict, and Politics."

Fruhwirth-Schnatter, Sylvia. 2006. Finite Mixture and Markov Switching Models. New York: Springer.

Krolzig, Hans-Martin. 1997. Markov-Switching Vector Autoregressions: Modeling, Statistical Inference, and Application to Business Cycle Analysis. Berlin: Springer.

Sims, Christopher A., Daniel F. Waggoner, and Tao Zha. 2008. "Methods for inference in large multiple-equation Markov-switching models." Journal of Econometrics 146(2):255--274.

See Also

msbvar, plot.SS, mean.SS