brms (version 1.4.0)

WAIC.brmsfit: Compute the WAIC

Description

Compute the widely applicable information criterion (WAIC) based on the posterior likelihood using the loo package.

Usage

"WAIC"(x, ..., compare = TRUE, newdata = NULL, re_formula = NULL, allow_new_levels = FALSE, subset = NULL, nsamples = NULL, pointwise = NULL)
WAIC(x, ...)

Arguments

x
A fitted model object typically of class brmsfit.
...
Optionally more fitted model objects.
compare
A flag indicating if the information criteria of the models should be compared to each other via compare_ic.
newdata
An optional data.frame for which to evaluate predictions. If NULL (default), the original data of the model is used.
re_formula
An optional formula containing group-level effects to be considered in the prediction. If NULL (default), all group-level effects are included; if NA, no group-level effects are included.
allow_new_levels
A flag indicating if new levels of group-level effects are allowed (defaults to FALSE). Only relevant if newdata is provided.
subset
A numeric vector specifying the posterior samples to be used. If NULL (the default), all samples are used.
nsamples
Positive integer indicating how many posterior samples should be used. If NULL (the default), all samples are used. Ignored if subset is not NULL.
pointwise
A flag indicating whether to compute the full log-likelihood matrix at once or separately for each observation. The latter approach is usually considerably slower but requires much less working memory, so set pointwise = TRUE if you run into memory issues. By default, pointwise is chosen automatically based on the size of the model (see the sketch following these arguments).
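A minimal sketch of the prediction-related arguments above, assuming a fitted brmsfit object fit and a hypothetical data frame new_inhaler containing the same predictor columns as the original data:

# evaluate the WAIC on new data while ignoring all group-level effects
WAIC(fit, newdata = new_inhaler, re_formula = NA)

# compute the log-likelihood separately per observation to save memory,
# based on 1000 posterior samples only
WAIC(fit, pointwise = TRUE, nsamples = 1000)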

Value

If just one object is provided, an object of class ic. If multiple objects are provided, an object of class iclist.

Methods (by class)

  • brmsfit: WAIC method for brmsfit objects

Details

When comparing models fitted to the same data, the smaller the WAIC, the better the fit. For brmsfit objects, waic is an alias of WAIC.
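A minimal sketch of the alias, assuming fit1 and fit2 are brmsfit objects such as those created in the Examples below:

waic(fit1)        # identical to WAIC(fit1)
waic(fit1, fit2)  # the model with the smaller WAIC is preferred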

References

Vehtari, A., Gelman, A., & Gabry, J. (2016). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. doi:10.1007/s11222-016-9696-4. arXiv preprint arXiv:1507.04544.

Gelman, A., Hwang, J., & Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing, 24, 997-1016.

Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. The Journal of Machine Learning Research, 11, 3571-3594.

Examples

## Not run: 
# model with population-level effects only
fit1 <- brm(rating ~ treat + period + carry,
            data = inhaler, family = "gaussian")
WAIC(fit1)

# model with an additional varying intercept for subjects
fit2 <- brm(rating ~ treat + period + carry + (1|subject),
            data = inhaler, family = "gaussian")
# compare both models
WAIC(fit1, fit2)

## End(Not run)
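To obtain the individual information criteria without the automatic comparison via compare_ic, compare = FALSE can be set (a sketch reusing the models fitted above):

# WAIC of each model without the pairwise comparison
WAIC(fit1, fit2, compare = FALSE)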
