MCMCpack (version 1.4-9)

HDPHMMpoisson: Markov Chain Monte Carlo for sticky HDP-HMM with a Poisson outcome distribution

Description

This function generates a sample from the posterior distribution of a (sticky) HDP-HMM with a Poisson outcome distribution (Fox et al. 2011). The user supplies data and priors, and a sample from the posterior distribution is returned as an mcmc object, which can be subsequently analyzed with functions provided in the coda package.

Usage

HDPHMMpoisson(
  formula,
  data = parent.frame(),
  K = 10,
  b0 = 0,
  B0 = 1,
  a.alpha = 1,
  b.alpha = 0.1,
  a.gamma = 1,
  b.gamma = 0.1,
  a.theta = 50,
  b.theta = 5,
  burnin = 1000,
  mcmc = 1000,
  thin = 1,
  verbose = 0,
  seed = NA,
  beta.start = NA,
  P.start = NA,
  gamma.start = 0.5,
  theta.start = 0.98,
  ak.start = 100,
  ...
)

Arguments

formula

Model formula.

data

Data frame.

K

The number of regimes under consideration. This should be larger than the hypothesized number of regimes in the data. Note that the sampler will likely visit fewer than K regimes.

b0

The prior mean of \(\beta\). This can either be a scalar or a column vector with dimension equal to the number of betas. If this takes a scalar value, then that value will serve as the prior mean for all of the betas.

B0

The prior precision of \(\beta\). This can either be a scalar or a square matrix with dimensions equal to the number of betas. If this takes a scalar value, then that value times an identity matrix serves as the prior precision of \(\beta\). A value of 0 is equivalent to an improper uniform prior for \(\beta\).
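For example, with two coefficients one could pass a vector prior mean and a matrix prior precision directly; the values below are purely illustrative (they mirror the Examples section):

   b0 <- rep(0, 2)          # prior mean of 0 for each coefficient
   B0 <- (1/9) * diag(2)    # prior precision; equivalent to a prior variance of 9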

a.alpha, b.alpha

Shape and scale parameters for the Gamma distribution on \(\alpha + \kappa\).

a.gamma, b.gamma

Shape and scale parameters for the Gamma distribution on \(\gamma\).

a.theta, b.theta

Parameters for the Beta prior on \(\theta\), which captures the strength of the self-transition bias.

burnin

The number of burn-in iterations for the sampler.

mcmc

The number of MCMC iterations after the burn-in period.

thin

The thinning interval used in the simulation. The number of mcmc iterations must be divisible by this value.

verbose

A switch which determines whether or not the progress of the sampler is printed to the screen. If verbose is greater than 0, the iteration number and the current value of the \(\beta\) vector are printed to the screen every verbose-th iteration.

seed

The seed for the random number generator. If NA, the Mersenne Twister generator is used with default seed 12345; if an integer is passed it is used to seed the Mersenne Twister. The user can also pass a list of length two to use the L'Ecuyer random number generator, which is suitable for parallel computation. The first element of the list is the L'Ecuyer seed, which is a vector of length six or NA (if NA a default seed of rep(12345,6) is used). The second element of the list is a positive substream number. See the MCMCpack specification for more details.
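For illustration, the following are all valid ways to specify the seed (the integer value here is arbitrary):

   seed = NA                       # Mersenne Twister with the default seed 12345
   seed = 54321                    # Mersenne Twister with a user-chosen integer seed
   seed = list(NA, 2)              # L'Ecuyer generator, default seed, substream 2
   seed = list(rep(12345, 6), 2)   # L'Ecuyer generator with an explicit length-six seed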

beta.start

The starting value for the \(\beta\) vector. This can either be a scalar or a column vector with dimension equal to the number of betas. If this takes a scalar value, then that value will serve as the starting value for all of the betas. The default value of NA will use the maximum likelihood estimate of \(\beta\) as the starting value for all regimes.

P.start

Initial transition matrix between regimes. This should be a K by K matrix. If not provided, the default places theta.start along the diagonal and distributes the remaining mass evenly within each row.
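A matrix of this form can also be built by hand; the sketch below assumes the default K = 10 and theta.start = 0.98:

   K <- 10
   theta.start <- 0.98
   P.start <- matrix((1 - theta.start) / (K - 1), nrow = K, ncol = K)
   diag(P.start) <- theta.start    # self-transition mass on the diagonal; rows sum to 1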

theta.start, ak.start, gamma.start

Scalar starting values for the \(\theta\), \(\alpha + \kappa\), and \(\gamma\) parameters.

...

further arguments to be passed.

Value

An mcmc object that contains the posterior sample. This object can be summarized by functions provided by the coda package.
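For instance, assuming posterior holds the output of a call to HDPHMMpoisson (as in the Examples section), the standard coda tools apply:

   library(coda)
   summary(posterior)    # posterior means, standard deviations, and quantiles
   plot(posterior)       # trace and density plots of the sampled parameters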

Details

HDPHMMpoisson simulates from the posterior distribution of a sticky HDP-HMM with a Poisson outcome distribution, allowing for multiple, arbitrary changepoints in the model. The details of the model are discussed in Blackwell (2017). The implementation here is based on a weak-limit approximation, in which there is a large but finite number of regimes that the chain can switch between. Unlike other changepoint models in MCMCpack, the HDP-HMM approach allows the state sequence to return to previously visited states.

The model takes the following form, where we show the fixed-limit version:

$$y_t \sim \mathcal{P}oisson(\mu_t)$$

$$\log(\mu_t) = x_t'\beta_m,\;\; m = 1, \ldots, M$$

where \(M\) is an upper bound on the number of states and \(\beta_m\) is the vector of regression parameters when the state at time \(t\) is \(m\).

The transition probabilities between states are assumed to follow a hierarchical Dirichlet process:

$$\pi_m \sim \mathcal{D}irichlet(\alpha\delta_1, \ldots, \alpha\delta_m + \kappa, \ldots, \alpha\delta_M)$$

$$\delta \sim \mathcal{D}irichlet(\gamma/M, \ldots, \gamma/M)$$

The \(\kappa\) value here is the sticky parameter that encourages self-transitions. The sampler follows Fox et al (2011) and parameterizes these priors with \(\alpha + \kappa\) and \(\theta = \kappa/(\alpha + \kappa)\), with the latter representing the degree of self-transition bias. Gamma priors are assumed for \((\alpha + \kappa)\) and \(\gamma\).
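As an illustration of this parameterization, the sketch below draws a single transition row for regime m from the sticky prior, recovering \(\alpha\) and \(\kappa\) from \(\alpha + \kappa\) and \(\theta\); the hyperparameter values are illustrative, and rdirichlet is the Dirichlet sampler shipped with MCMCpack:

   library(MCMCpack)
   M <- 10                               # weak-limit truncation level
   gamma <- 10; ak <- 10; theta <- 0.95  # ak plays the role of alpha + kappa
   alpha <- (1 - theta) * ak             # alpha = (1 - theta) * (alpha + kappa)
   kappa <- theta * ak                   # kappa = theta * (alpha + kappa)
   delta <- c(rdirichlet(1, rep(gamma / M, M)))
   m <- 1                                # regime whose transition row we draw
   conc <- alpha * delta
   conc[m] <- conc[m] + kappa            # extra mass on the self-transition
   pi_m <- c(rdirichlet(1, conc))        # one draw of the m-th transition row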

We assume a Gaussian prior for the \(\beta\) parameters in each regime:

$$\beta_m \sim \mathcal{N}(b_0,B_0^{-1}),\;\; m = 1, \ldots, M$$

The model is estimated via blocked Gibbs sampling, conditional on the states. The \(\beta\) parameters are simulated via the auxiliary mixture sampling method of Fruehwirth-Schnatter et al. (2009). The states are updated as in Fox et al. (2011), supplemental materials.

References

Andrew D. Martin, Kevin M. Quinn, and Jong Hee Park. 2011. ``MCMCpack: Markov Chain Monte Carlo in R.'' Journal of Statistical Software, 42(9): 1-21. http://www.jstatsoft.org/v42/i09/.

Daniel Pemstein, Kevin M. Quinn, and Andrew D. Martin. 2007. Scythe Statistical Library 1.0. http://scythe.lsa.umich.edu.

Sylvia Fruehwirth-Schnatter, Rudolf Fruehwirth, Leonhard Held, and Havard Rue. 2009. ``Improved auxiliary mixture sampling for hierarchical models of non-Gaussian data.'' Statistics and Computing, 19(4): 479-492. http://doi.org/10.1007/s11222-008-9109-4

Matthew Blackwell. 2017. ``Game Changers: Detecting Shifts in Overdispersed Count Data.'' Political Analysis, Forthcoming. http://www.mattblackwell.org/files/papers/gamechangers-letter.pdf

Emily B. Fox, Erik B. Sudderth, Michael I. Jordan, and Alan S. Willsky. 2011. ``A sticky HDP-HMM with application to speaker diarization.'' The Annals of Applied Statistics, 5(2A): 1020-1056. http://doi.org/10.1214/10-AOAS395SUPP

See Also

MCMCpoissonChange, HDPHMMnegbin

Examples

   n <- 150
   reg <- 3
   true.s <- gl(reg, n/reg, n)
   b1.true <- c(1, -2, 2)
   x1 <- runif(n, 0, 2)
   mu <- exp(1 + x1 * b1.true[true.s])
   y <- rpois(n, mu)

   posterior <- HDPHMMpoisson(y ~ x1, K = 10, verbose = 1000,
                          a.theta = 100, b.theta = 1,
                          b0 = rep(0, 2), B0 = (1/9) * diag(2),
                          seed = list(NA, 2),
                          theta.start = 0.95, gamma.start = 10,
                          ak.start = 10)

   plotHDPChangepoint(posterior, ylab="Density", start=1)
   