mcmcsae (version 0.6.0)

model-information-criteria: Compute DIC, WAIC and leave-one-out cross-validation model measures

Description

Compute the Deviance Information Criterion (DIC) or Watanabe-Akaike Information Criterion (WAIC) from an object of class draws, as output by MCMCsim. The waic method for draws objects computes WAIC using package loo. The loo method for draws objects also relies on package loo, computing a Pareto-smoothed importance sampling (PSIS) approximation to leave-one-out cross-validation.

Usage

compute_DIC(x, use.pV = FALSE)

compute_WAIC(x, diagnostic = FALSE, batch.size = NULL, show.progress = TRUE)

# S3 method for class 'draws'
waic(x, by.unit = FALSE, ...)

# S3 method for class 'draws'
loo(x, r_eff = FALSE, n.cores = 1L, ...)

Arguments

x

an object of class draws.

use.pV

whether half the posterior variance of the deviance should be used as an alternative estimate of the effective number of model parameters for DIC.

diagnostic

whether vectors of log-pointwise-predictive-densities and pointwise contributions to the WAIC effective number of model parameters should be returned.

batch.size

number of data units to process per batch.

show.progress

whether to show a progress bar.

by.unit

if TRUE the computation is carried out unit-by-unit, which is slower but uses much less memory.

...

other arguments, passed to loo. Currently not used by the waic method for draws objects.

r_eff

whether to compute relative effective sample size estimates for the likelihood of each observation. This takes more time, but should result in a better PSIS approximation. See loo.

n.cores

how many cores to use.
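
The arguments above can be combined as in the following sketch (a hypothetical illustration; sim stands for any draws object returned by MCMCsim, as in the Examples below):

# DIC with the alternative effective-parameter estimate
# (half the posterior variance of the deviance)
compute_DIC(sim, use.pV=TRUE)
# WAIC with pointwise diagnostics, processed in batches of
# 50 data units, without a progress bar
compute_WAIC(sim, diagnostic=TRUE, batch.size=50, show.progress=FALSE)
if (require(loo)) {
  # unit-by-unit computation: slower, but uses much less memory
  waic(sim, by.unit=TRUE)
  # relative effective sample sizes improve the PSIS approximation
  loo(sim, r_eff=TRUE, n.cores=2L)
}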

Value

For compute_DIC a vector with the deviance information criterion and the effective number of model parameters. For compute_WAIC a vector with the WAIC model selection criterion and the WAIC effective number of model parameters. Method waic returns an object of classes waic and loo; see the documentation of waic in package loo. Method loo returns an object of class psis_loo; see loo.
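
Because the waic and loo methods return standard objects from package loo, their output for competing models can be passed to loo::loo_compare for model comparison. A sketch, under the assumption that sim1 and sim2 are draws objects for models fit to the same data:

if (require(loo)) {
  # lower looic (higher elpd) indicates better expected predictive performance
  loo_compare(loo(sim1), loo(sim2))
}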

References

D. Spiegelhalter, N. Best, B. Carlin and A. van der Linde (2002). Bayesian Measures of Model Complexity and Fit. Journal of the Royal Statistical Society B 64 (4), 583-639.

S. Watanabe (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 11, 3571-3594.

A. Gelman, J. Hwang and A. Vehtari (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing 24, 997-1016.

A. Vehtari, A. Gelman and J. Gabry (2015). Pareto smoothed importance sampling. arXiv preprint arXiv:1507.02646.

A. Vehtari, A. Gelman and J. Gabry (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing 27, 1413-1432.

P.-C. Buerkner, J. Gabry and A. Vehtari (2019). Bayesian leave-one-out cross-validation for non-factorizable normal models. arXiv:1810.10559v3.

Examples

ex <- mcmcsae_example(n=100)
sampler <- create_sampler(ex$model, data=ex$dat)
sim <- MCMCsim(sampler, burnin=100, n.iter=300, n.chain=4, store.all=TRUE)
compute_DIC(sim)
compute_WAIC(sim)
if (require(loo)) {
  waic(sim)
  loo(sim, r_eff=TRUE)
}