Proposed by Schwarz (1978), the BIC (Bayesian Information Criterion) measures the quality of a model's fit to the data. When comparing models fitted to the same data, the smaller the BIC, the better the fit.
BIC theory requires that the log-likelihood be evaluated at its maximum, but since we are in a Bayesian context the log-likelihood is instead computed as described in logLik.bayesbr:
the log-likelihood formula is evaluated at the sampled values of theta and the results are averaged over the chain.
The BIC is calculated by
$$ BIC = \log(n) \, k - 2L, $$
where n is the number of observations used to fit the model, k is the number of covariates in the model, and L is the mean of the log-likelihood chain returned by logLik.bayesbr.
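The computation above can be sketched as follows. This is a minimal illustration of the formula, not the bayesbr implementation (which is in R); the function name and the mock chain of log-likelihood values are hypothetical.

```python
import math

def bic_from_loglik_chain(loglik_chain, n, k):
    """Compute BIC = log(n)*k - 2*L from a chain of log-likelihood values.

    L is the mean of the per-draw log-likelihoods, mirroring the average
    of the loglik chain described for logLik.bayesbr. Hypothetical sketch.
    """
    L = sum(loglik_chain) / len(loglik_chain)  # mean log-likelihood over the chain
    return math.log(n) * k - 2 * L

# Example: a short mock chain of log-likelihood draws
chain = [-120.4, -119.8, -121.1, -120.0]
bic = bic_from_loglik_chain(chain, n=50, k=3)
```

As with the criterion itself, when two models are fitted to the same data, the one with the smaller value of `bic` is preferred.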