The `potential scale reduction factor' is calculated for each variable in
x
, together with upper and lower confidence limits. Approximate
convergence is diagnosed when the upper limit is close to 1. For
multivariate chains, a multivariate value is calculated that bounds
above the potential scale reduction factor for any linear combination
of the (possibly transformed) variables.
The confidence limits are based on the assumption that the stationary distribution of the variable under examination is normal. Hence the `transform' parameter may be used to improve the normal approximation.
gelman.diag(x, confidence = 0.95, transform=FALSE, autoburnin=TRUE,
multivariate=TRUE)
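
As a minimal usage sketch (the chains and variable names below are invented for illustration), two parallel chains are wrapped in an mcmc.list and passed to gelman.diag:

library(coda)

## Two hypothetical parallel chains with two monitored variables each.
## In practice these would come from an MCMC sampler; here they are
## independent draws, purely for illustration.
set.seed(1)
chain1 <- mcmc(matrix(rnorm(2000), ncol = 2,
                      dimnames = list(NULL, c("alpha", "beta"))))
chain2 <- mcmc(matrix(rnorm(2000), ncol = 2,
                      dimnames = list(NULL, c("alpha", "beta"))))
x <- mcmc.list(chain1, chain2)

gelman.diag(x, confidence = 0.95, autoburnin = TRUE, multivariate = TRUE)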
x: An mcmc.list object with more than one chain, and with starting values that are overdispersed with respect to the posterior distribution.

confidence: the coverage probability of the confidence interval for the potential scale reduction factor.

transform: a logical flag indicating whether variables in x should be transformed to improve the normality of the distribution. If set to TRUE, a log transform or logit transform, as appropriate, will be applied (see the sketch after this list).

autoburnin: a logical flag indicating whether only the second half of the series should be used in the computation. If set to TRUE (default) and start(x) is less than end(x)/2, then the start of the series is adjusted so that only the second half of the series is used.

multivariate: a logical flag indicating whether the multivariate potential scale reduction factor should be calculated for multivariate chains.
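
A brief sketch of the transform argument, using a hypothetical strictly positive variable (so a log transform would be the appropriate choice):

library(coda)

## A strictly positive variable (e.g. a variance parameter); with
## transform = TRUE a log transform is applied before the diagnostic.
set.seed(2)
pos1 <- mcmc(matrix(rlnorm(1000), ncol = 1, dimnames = list(NULL, "sigma2")))
pos2 <- mcmc(matrix(rlnorm(1000), ncol = 1, dimnames = list(NULL, "sigma2")))
gelman.diag(mcmc.list(pos1, pos2), transform = TRUE)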
An object of class gelman.diag. This is a list with the following elements:

psrf: A list containing the point estimates of the potential scale reduction factor (labelled Point est.) and their upper confidence limits (labelled Upper C.I.).

mpsrf: The point estimate of the multivariate potential scale reduction factor. This is NULL if there is only one variable in x.
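
Continuing the usage sketch above, the components of the returned object can be inspected directly (the component names used below, psrf and mpsrf, match the element descriptions above):

library(coda)
set.seed(1)
x <- mcmc.list(mcmc(matrix(rnorm(2000), ncol = 2)),
               mcmc(matrix(rnorm(2000), ncol = 2)))

fit <- gelman.diag(x)
fit$psrf    # point estimates ("Point est.") and upper confidence limits
fit$mpsrf   # multivariate point estimate; NULL if x had only one variable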
Gelman and Rubin (1992) propose a general approach to monitoring
convergence of MCMC output in which two or more parallel chains are run
with starting values that are overdispersed relative to the posterior
distribution. The gelman.diag diagnostic is applied to a single variable
from the chain. It is based on a comparison of within-chain and
between-chain variances, and is similar to a classical analysis of
variance.
There are two ways to estimate the variance of the stationary distribution:
the mean of the empirical variance within each chain, W, and the empirical
variance from all chains combined, which can be expressed as

sigma.hat^2 = (n - 1) * W / n + B / n

where n is the number of iterations and B/n is the empirical between-chain variance.
If the chains have converged, then both estimates are unbiased. Otherwise the first method will underestimate the variance, since the individual chains have not had time to range all over the stationary distribution, and the second method will overestimate the variance, since the starting points were chosen to be overdispersed.
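
As a small illustrative sketch (all numbers invented), suppose the target is N(0, 1) but the chains are still stuck near overdispersed starting values; the within-chain estimate is then too small and the pooled estimate too large:

## Target variance is 1; each chain only explores a neighbourhood of
## its (overdispersed) starting point.
set.seed(1)
n <- 1000
chains <- cbind(rnorm(n, mean = -3, sd = 0.5),
                rnorm(n, mean =  3, sd = 0.5))   # one column per chain

W <- mean(apply(chains, 2, var))        # mean within-chain variance: about 0.25
B <- n * var(colMeans(chains))          # B/n is the empirical between-chain variance
sigma2.hat <- (n - 1) / n * W + B / n   # pooled estimate: about 18
c(W = W, pooled = sigma2.hat)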
The convergence diagnostic is based on the assumption that the
target distribution is normal. A Bayesian credible interval can
be constructed using a t-distribution with mean

mu.hat = sample mean of all chains combined

and variance

V.hat = sigma.hat^2 + B/(mn)

where m is the number of chains, and with degrees of freedom estimated by
the method of moments

d = 2 * V.hat^2 / Var(V.hat).

Use of the t-distribution accounts for the fact that the mean and variance
of the posterior distribution are estimated. The convergence diagnostic
itself is

R = sqrt(((d + 3) * V.hat) / ((d + 1) * W)).

Values substantially above 1 indicate lack of convergence.
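
The pieces above can be assembled by hand for a single variable and compared with gelman.diag. This is only a sketch: it omits the sampling variance of V.hat (and hence d and the confidence limit), reporting the uncorrected ratio sqrt(V.hat/W), which is close to R when d is large.

library(coda)

## Hand computation of the point estimate for one variable, following
## the formulas above (sketch only; coda also applies the
## (d + 3)/(d + 1) correction and reports a confidence limit).
set.seed(1)
n <- 1000; m <- 2
chains <- cbind(rnorm(n), rnorm(n))            # one column per chain

W <- mean(apply(chains, 2, var))               # mean within-chain variance
B <- n * var(colMeans(chains))                 # B/n is the between-chain variance
sigma2.hat <- (n - 1) / n * W + B / n          # pooled variance estimate
V.hat <- sigma2.hat + B / (m * n)
sqrt(V.hat / W)                                # approximate potential scale reduction factor

gelman.diag(mcmc.list(mcmc(chains[, 1]), mcmc(chains[, 2])),
            autoburnin = FALSE)$psrf           # coda's estimate, for comparison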
Gelman, A. and Rubin, D. B. (1992) Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457-511.
Brooks, S. P. and Gelman, A. (1998) General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7, 434-455.