Compute the Deviance Information Criterion (DIC) or the Watanabe-Akaike Information Criterion (WAIC) from an object of class draws output by MCMCsim. Method waic.draws computes WAIC using package loo. Method loo.draws also relies on package loo, computing a Pareto-smoothed importance sampling (PSIS) approximation to leave-one-out cross-validation.
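The following is a minimal conceptual sketch, not the package implementation, of the quantities these functions estimate. It uses a placeholder vector of posterior deviance draws and a placeholder draws-by-units log-likelihood matrix (both hypothetical); the formulas follow Spiegelhalter et al. (2002) and Gelman, Hwang and Vehtari (2014), listed in the references below.
# Conceptual sketch only: 'D', 'D.at.mean' and 'll' are placeholders,
# not objects produced by mcmcsae.
D <- rnorm(1000, mean = 200, sd = 5)       # placeholder posterior deviance draws
D.at.mean <- 195                           # placeholder deviance at the posterior mean
p_D <- mean(D) - D.at.mean                 # DIC effective number of parameters
p_V <- 0.5 * var(D)                        # alternative estimate, cf. use.pV = TRUE
DIC <- mean(D) + p_D                       # deviance information criterion

ll <- matrix(rnorm(1000 * 50, mean = -1), 1000, 50)  # placeholder log-likelihoods (draws x units)
lppd <- sum(log(colMeans(exp(ll))))        # log pointwise predictive density
p_WAIC <- sum(apply(ll, 2, var))           # WAIC effective number of parameters
WAIC <- -2 * (lppd - p_WAIC)               # Watanabe-Akaike information criterion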
compute_DIC(x, use.pV = FALSE)

compute_WAIC(x, diagnostic = FALSE, batch.size = NULL, show.progress = TRUE)
# S3 method for draws
waic(x, by.unit = FALSE, ...)
# S3 method for draws
loo(x, r_eff = FALSE, n.cores = 1L, ...)
x: an object of class draws.

use.pV: whether half the posterior variance of the deviance should be used as an alternative estimate of the effective number of model parameters for DIC.

diagnostic: whether vectors of log pointwise predictive densities and pointwise contributions to the WAIC effective number of model parameters should be returned.

batch.size: number of data units to process per batch.

show.progress: whether to show a progress bar.

by.unit: if TRUE, the computation is carried out unit by unit, which is slower but uses much less memory.

...: other arguments, passed to loo. Currently not used by waic.draws.

r_eff: whether to compute relative effective sample size estimates for the likelihood of each observation. This takes more time, but should result in a better PSIS approximation. See loo, and the sketch after this argument list.

n.cores: how many cores to use.
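As a rough illustration of the role of r_eff, here is a small sketch of package loo's generic matrix interface (not of what loo.draws does internally), using a placeholder log-likelihood matrix and chain indicator: relative effective sample sizes of the likelihood are computed with loo::relative_eff and passed on to loo::loo.
if (requireNamespace("loo", quietly = TRUE)) {
  ll <- matrix(rnorm(400 * 25, mean = -1), 400, 25)  # placeholder log-likelihoods (draws x units)
  chain_id <- rep(1:4, each = 100)                   # chain membership of each draw
  r_eff <- loo::relative_eff(exp(ll), chain_id = chain_id)
  loo::loo(ll, r_eff = r_eff)
}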
For compute_DIC: a vector with the deviance information criterion (DIC) and the effective number of model parameters. For compute_WAIC: a vector with the WAIC model selection criterion and the WAIC effective number of model parameters. Method waic returns an object of class c("waic", "loo"); see the documentation for waic in package loo. Method loo returns an object of class psis_loo; see loo.
D. Spiegelhalter, N. Best, B. Carlin and A. van der Linde (2002). Bayesian Measures of Model Complexity and Fit. Journal of the Royal Statistical Society B 64 (4), 583-639.
S. Watanabe (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 11, 3571-3594.
A. Gelman, J. Hwang and A. Vehtari (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing 24, 997-1016.
A. Vehtari, A. Gelman and J. Gabry (2015). Pareto smoothed importance sampling. arXiv preprint arXiv:1507.02646.
A. Vehtari, A. Gelman and J. Gabry (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing 27, 1413-1432.
P.-C. Buerkner, J. Gabry and A. Vehtari (2019). Bayesian leave-one-out cross-validation for non-factorizable normal models. arXiv:1810.10559v3.
# NOT RUN {
# generate artificial data and set up the example model
ex <- mcmcsae_example(n=100)
sampler <- create_sampler(ex$model, data=ex$dat)
# run the MCMC simulation
sim <- MCMCsim(sampler, burnin=100, n.iter=300, n.chain=4, store.all=TRUE)
# information criteria computed by mcmcsae itself
compute_DIC(sim)
compute_WAIC(sim)
# WAIC and PSIS-LOO computed via package loo
if (require(loo)) {
  waic(sim)
  loo(sim, r_eff=TRUE)
}
# }
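The objects returned by waic and loo can also be passed to loo::loo_compare for model comparison. The following is a hedged sketch: the two placeholder log-likelihood matrices below merely stand in for competing models, whereas in practice one would compare the psis_loo objects returned by loo for each fitted draws object.
if (requireNamespace("loo", quietly = TRUE)) {
  ll1 <- matrix(rnorm(400 * 25, mean = -1.0), 400, 25)  # placeholder log-likelihoods, model 1
  ll2 <- matrix(rnorm(400 * 25, mean = -1.2), 400, 25)  # placeholder log-likelihoods, model 2
  loo::loo_compare(loo::loo(ll1), loo::loo(ll2))
}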