The 'potential scale reduction factor' is calculated for each variable in `x`, together with upper and lower confidence limits. Approximate convergence is diagnosed when the upper limit is close to 1. For multivariate chains, a multivariate value is calculated that bounds above the potential scale reduction factor for any linear combination of the (possibly transformed) variables.

The confidence limits are based on the assumption that the stationary distribution of the variable under examination is normal. Hence the `transform` parameter may be used to improve the normal approximation.

```
gelman.diag(x, confidence = 0.95, transform=FALSE, autoburnin=TRUE,
multivariate=TRUE)
```
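
A minimal usage sketch, assuming the `line` example data shipped with the coda package (an `mcmc.list` containing two parallel chains):

```
library(coda)
data(line)    # two parallel chains from a simple linear regression example
gd <- gelman.diag(line)
gd$psrf       # point estimates and upper confidence limits, one row per variable
gd$mpsrf      # multivariate potential scale reduction factor
```

Values of `gd$psrf` close to 1 suggest approximate convergence.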

x

An `mcmc.list` object with more than one chain, and with starting values that are overdispersed with respect to the posterior distribution.

confidence

the coverage probability of the confidence interval for the potential scale reduction factor

transform

a logical flag indicating whether variables in `x` should be transformed to improve the normality of the distribution. If set to TRUE, a log transform or logit transform, as appropriate, will be applied.

autoburnin

a logical flag indicating whether only the second half of the series should be used in the computation. If set to TRUE (default) and `start(x)` is less than `end(x)/2`, then the start of the series will be adjusted so that only the second half of the series is used.

multivariate

a logical flag indicating whether the multivariate potential scale reduction factor should be calculated for multivariate chains
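
When `autoburnin = TRUE`, the adjustment described above amounts to discarding the first half of each chain. A rough sketch of the windowing step (the exact boundary handling inside `gelman.diag` may differ):

```
if (autoburnin && start(x) < end(x)/2)
    x <- window(x, start = end(x)/2)
```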

An object of class `gelman.diag`. This is a list with the following elements:

psrf

A list containing the point estimates of the potential scale reduction factor (labelled `Point est.`) and their upper confidence limits (labelled `Upper C.I.`).

mpsrf

The point estimate of the multivariate potential scale reduction factor. This is NULL if there is only one variable in `x`.

Gelman and Rubin (1992) propose a general approach to monitoring convergence of MCMC output in which \(m > 1\) parallel chains are run with starting values that are overdispersed relative to the posterior distribution. Convergence is diagnosed when the chains have 'forgotten' their initial values, and the output from all chains is indistinguishable. The `gelman.diag` diagnostic is applied to a single variable from the chain. It is based on a comparison of within-chain and between-chain variances, and is similar to a classical analysis of variance.

There are two ways to estimate the variance of the stationary distribution: the mean of the empirical variance within each chain, \(W\), and the empirical variance from all chains combined, which can be expressed as $$ \widehat{\sigma}^2 = \frac{(n-1) W }{n} + \frac{B}{n} $$ where \(n\) is the number of iterations and \(B/n\) is the empirical between-chain variance.
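
A minimal numerical sketch of these two estimates, using plain matrices rather than `mcmc` objects (all object names here are illustrative):

```
set.seed(1)
m <- 2                # number of chains
n <- 1000             # iterations per chain
chains <- matrix(rnorm(m * n), nrow = n, ncol = m)   # one column per chain

W <- mean(apply(chains, 2, var))           # mean within-chain variance
B.over.n <- var(colMeans(chains))          # empirical between-chain variance B/n
sigma2.hat <- (n - 1) * W / n + B.over.n   # pooled variance estimate
```

For these independent, well-mixed simulated chains, `W` and `sigma2.hat` should both be close to the true variance of 1.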

If the chains have converged, then both estimates are
unbiased. Otherwise the first method will *underestimate* the
variance, since the individual chains have not had time to range all
over the stationary distribution, and the second method will
*overestimate* the variance, since the starting points were chosen
to be overdispersed.

The convergence diagnostic is based on the assumption that the target distribution is normal. A Bayesian credible interval can be constructed using a t-distribution with mean $$\widehat{\mu}=\mbox{Sample mean of all chains combined}$$ and variance $$\widehat{V}=\widehat{\sigma}^2 + \frac{B}{mn}$$ and degrees of freedom estimated by the method of moments $$d = \frac{2\widehat{V}^2}{\mbox{Var}(\widehat{V})}$$ Use of the t-distribution accounts for the fact that the mean and variance of the posterior distribution are estimated.

The convergence diagnostic itself is $$R=\sqrt{\frac{(d+3) \widehat{V}}{(d+1)W}}$$ Values substantially above 1 indicate lack of convergence. If the chains have not converged, Bayesian credible intervals based on the t-distribution are too wide, and have the potential to shrink by this factor if the MCMC run is continued.
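
As a self-contained sketch of how \(R\) is assembled from the quantities above (this skips the method-of-moments degrees of freedom, which requires an estimate of \(\mbox{Var}(\widehat{V})\), and uses the large-\(d\) simplification \((d+3)/(d+1)\approx 1\)):

```
set.seed(1)
m <- 2; n <- 1000
chains <- matrix(rnorm(m * n), nrow = n, ncol = m)   # one column per chain

W <- mean(apply(chains, 2, var))           # mean within-chain variance
B.over.n <- var(colMeans(chains))          # empirical between-chain variance B/n
sigma2.hat <- (n - 1) * W / n + B.over.n   # pooled variance estimate
V.hat <- sigma2.hat + B.over.n / m         # variance of the t interval

R.approx <- sqrt(V.hat / W)   # close to 1 when the chains have converged
```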

Gelman, A and Rubin, DB (1992) Inference from iterative simulation
using multiple sequences, *Statistical Science*, **7**, 457-511.

Brooks, SP. and Gelman, A. (1998) General methods for monitoring
convergence of iterative simulations. *Journal of Computational and
Graphical Statistics*, **7**, 434-455.