MCMCoprobit generates a sample from the posterior density of an ordered
probit regression model using data augmentation. The sample is returned as an
mcmc object, which can be subsequently analyzed with functions
provided in the coda package.

MCMCoprobit(formula, data = list(), burnin = 1000, mcmc = 10000,
  thin = 5, tune = NA, verbose = FALSE, seed = 0, beta.start = NA,
  b0 = 0, B0 = 0.001, ...)
The function returns an mcmc object that contains the posterior density
sample. This object can be summarized by functions provided by the coda
package.

MCMCoprobit simulates from the posterior density of an ordered probit
regression model using data augmentation. The simulation proper is
done in compiled C++ code to maximize efficiency. Please consult the
coda documentation for a comprehensive list of functions that can be
used to analyze the posterior density sample.
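For instance, a minimal sketch of the kind of coda analysis that might follow a fit (assuming posterior holds the mcmc object returned by MCMCoprobit, as in the example at the end of this page) is:

## Sketch only: `posterior` is assumed to be an mcmc object returned by MCMCoprobit
library(coda)
summary(posterior)        # posterior means, SDs, and quantiles
effectiveSize(posterior)  # effective sample size for each parameter
HPDinterval(posterior)    # 95% highest posterior density intervals
geweke.diag(posterior)    # Geweke convergence diagnostic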
The observed variable $y_i$ is ordinal with a total of $C$
categories, with distribution
governed by a latent variable:
$$z_i = x_i'\beta + \varepsilon_i$$
The errors are assumed to be from a standard Normal distribution. The
probabilities of observing each outcome are governed by this latent
variable and $C-1$ estimable cutpoints, which are denoted
$\gamma_c$. The probability that individual $i$
is in category $c$ is computed by:
$$\pi_{ic} = \Phi(\gamma_c - x_i'\beta) - \Phi(\gamma_{c-1} - x_i'\beta)$$
These probabilities are used to form the multinomial distribution
that defines the likelihood.
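As an illustration (not part of the original documentation), these category probabilities can be computed directly with pnorm, using the conventions $\gamma_0 = -\infty$ and $\gamma_C = \infty$ and hypothetical values for $\beta$ and the cutpoints:

## Hypothetical values chosen purely for illustration
beta  <- c(1.0, 0.1, -0.5)        # coefficients, intercept first
x_i   <- c(1, 0.2, -0.3)          # covariate vector for individual i (with constant)
gamma <- c(-Inf, 0, 1, 1.5, Inf)  # cutpoints bracketing C = 4 categories
xb <- sum(x_i * beta)
pi_i <- pnorm(gamma[-1] - xb) - pnorm(gamma[-length(gamma)] - xb)
pi_i       # one probability per category
sum(pi_i)  # probabilities sum to 1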
The algorithm employed is discussed in depth by Cowles (1996). Note that
the model includes a constant in the data matrix; the first
cutpoint $\gamma_1$ is therefore normalized to zero and is not
returned in the mcmc object.

See also plot.mcmc and summary.mcmc.
x1 <- rnorm(100)
x2 <- rnorm(100)
z <- 1.0 + x1 * 0.1 - x2 * 0.5 + rnorm(100)  # latent variable
y <- z
y[z < 0] <- 0                                # discretize into four ordered categories
y[z >= 0 & z < 1] <- 1
y[z >= 1 & z < 1.5] <- 2
y[z >= 1.5] <- 3
posterior <- MCMCoprobit(y ~ x1 + x2, tune = 0.3, mcmc = 20000)
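For example (not part of the original example code), the resulting sample can then be inspected with the plot.mcmc and summary.mcmc methods mentioned above:

plot(posterior)     # trace and density plots of the sampled parameters
summary(posterior)  # posterior summary statistics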