Compute the Deviance Information Criterion (DIC) or the Watanabe-Akaike Information Criterion (WAIC) from an object of class mcdraws as output by MCMCsim. Method waic.mcdraws computes WAIC using package loo. Method loo.mcdraws also depends on package loo, which it uses to compute a Pareto-smoothed importance sampling (PSIS) approximation to leave-one-out cross-validation.
Usage

compute_DIC(x, use.pV = FALSE)

compute_WAIC(
  x,
  diagnostic = FALSE,
  batch.size = NULL,
  show.progress = TRUE,
  cl = NULL,
  n.cores = 1L
)
# S3 method for mcdraws
waic(x, by.unit = FALSE, ...)
# S3 method for mcdraws
loo(x, by.unit = FALSE, r_eff = FALSE, n.cores = 1L, ...)
Value

For compute_DIC, a vector with the deviance information criterion and the effective number of model parameters. For compute_WAIC, a vector with the WAIC model selection criterion and the WAIC effective number of model parameters. Method waic returns an object of class c("waic", "loo"); see the documentation for waic in package loo. Method loo returns an object of class psis_loo; see loo.
Arguments

x: an object of class mcdraws.

use.pV: whether half the posterior variance of the deviance should be used as an alternative estimate of the effective number of model parameters for DIC.

diagnostic: whether vectors of log pointwise predictive densities and pointwise contributions to the WAIC effective number of model parameters should be returned.

batch.size: the number of data units to process per batch.

show.progress: whether to show a progress bar.

cl: an existing cluster can be passed for parallel computation. If cl is provided, n.cores will be set to the number of workers in that cluster. If NULL and n.cores > 1, a new cluster is created.

n.cores: the number of CPU cores to use. The default is one, i.e. no parallel computation.

by.unit: if TRUE, the computation is carried out unit by unit, which is slower but uses much less memory.

...: other arguments, passed to loo. Not currently used by waic.mcdraws.

r_eff: whether to compute relative effective sample size estimates for the likelihood of each observation. This takes more time, but should result in a better PSIS approximation. See loo.
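A sketch of the cl/n.cores interaction described above (assuming a fitted sim object of class mcdraws, as created in the examples below): an existing cluster created with package parallel can be reused across calls instead of letting compute_WAIC create and tear down its own.

```r
library(parallel)

# Create a cluster once and reuse it; n.cores is then set to the
# number of workers in cl (here 2), as described above.
cl <- makeCluster(2)
res <- compute_WAIC(sim, cl = cl)
stopCluster(cl)
```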
References

D. Spiegelhalter, N. Best, B. Carlin and A. van der Linde (2002). Bayesian Measures of Model Complexity and Fit. Journal of the Royal Statistical Society B 64 (4), 583-639.

S. Watanabe (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 11, 3571-3594.
A. Gelman, J. Hwang and A. Vehtari (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing 24, 997-1016.
A. Vehtari, A. Gelman and J. Gabry (2015). Pareto smoothed importance sampling. arXiv:1507.02646.
A. Vehtari, A. Gelman and J. Gabry (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing 27, 1413-1432.
P.-C. Buerkner, J. Gabry and A. Vehtari (2020). Efficient leave-one-out cross-validation for Bayesian non-factorized normal and Student-t models. arXiv:1810.10559.
Examples
ex <- mcmcsae_example(n=100)
sampler <- create_sampler(ex$model, data=ex$dat)
sim <- MCMCsim(sampler, burnin=100, n.iter=300, n.chain=4, store.all=TRUE)
compute_DIC(sim)
compute_WAIC(sim)
if (require(loo)) {
  waic(sim)
  loo(sim, r_eff=TRUE)
}