The "simple" ICC (with both `ppd` and `adjusted` set to `FALSE`) is calculated
by dividing the between-group variance (random intercept variance) by the
total variance (i.e. the sum of the between-group variance and the
within-group (residual) variance).
The calculation of the ICC for generalized linear mixed models with binary
outcome is based on Wu et al. (2012). For other distributions (negative
binomial, Poisson, ...), the calculation is based on Nakagawa et al. (2017),
**however**, for non-Gaussian models it is recommended to compute the
adjusted ICC (with `adjusted = TRUE`, see below).
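
The "simple" ICC described above can be reproduced by hand from the variance
components of a fitted model. The following sketch uses lme4's `sleepstudy`
data as an illustrative assumption; it is not part of `icc()` itself:

```r
# Manual computation of the "simple" ICC for a random-intercept model.
library(lme4)
fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

vc <- as.data.frame(VarCorr(fit))
tau_00 <- vc$vcov[vc$grp == "Subject"]   # between-group (random intercept) variance
sigma_2 <- vc$vcov[vc$grp == "Residual"] # within-group (residual) variance

tau_00 / (tau_00 + sigma_2)              # the "simple" ICC
```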

**ICC for unconditional and conditional models**

Usually, the ICC is calculated for the null model ("unconditional model").
However, according to Raudenbush and Bryk (2002) or
Rabe-Hesketh and Skrondal (2012) it is also feasible to compute the ICC
for full models with covariates ("conditional models") and compare how
much of the variation in the grouping structure (random intercept) a
level-2 variable explains.
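
A minimal sketch of such a comparison, again assuming lme4's `sleepstudy`
data for illustration:

```r
# Compare the ICC of an unconditional (null) and a conditional model.
library(lme4)
library(sjstats)

m0 <- lmer(Reaction ~ 1 + (1 | Subject), data = sleepstudy)    # unconditional
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy) # conditional

icc(m0)
icc(m1)  # how much level-2 variation remains once covariates are included?
```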

**ICC for random-slope models**

**Caution:** For models with random slopes and random intercepts,
the ICC differs at each unit of the predictors. Hence, the ICC for these
kinds of models cannot be understood simply as a proportion of variance
(see Goldstein et al. 2010). For convenience, since the `icc()`
function also extracts the different random effect
variances, the ICC for random-slope-intercept models is reported
nonetheless, but it is usually not a meaningful summary of the
proportion of variances.

To get a meaningful ICC also for models with random slopes, use
`adjusted = TRUE`.
The adjusted ICC uses the mean random effect variance, which is based
on the random effect variances for each value of the random slope
(see Johnson et al. 2014).

**ICC for models with multiple or nested random effects**

**Caution:** By default, for three-level models, depending on the
nested structure of the model, or for models with multiple random effects,
`icc()` only reports the proportion of variance explained for each
grouping level. Use `adjusted = TRUE` to calculate the adjusted and
conditional ICC, which condition on *all random effects*.
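
As an illustrative sketch, lme4's `Penicillin` data (an assumption here,
not taken from this documentation) provide a model with two crossed
random effects:

```r
# Default vs. adjusted ICC for a model with multiple random effects.
library(lme4)
library(sjstats)
fit <- lmer(diameter ~ 1 + (1 | plate) + (1 | sample), data = Penicillin)

icc(fit)                   # proportion of variance per grouping level
icc(fit, adjusted = TRUE)  # adjusted and conditional ICC, conditioning
                           # on all random effects
```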

**Adjusted and conditional ICC**

If `adjusted = TRUE`, an adjusted and a conditional ICC are calculated,
which take all sources of uncertainty (of *all random effects*)
into account to report an "adjusted" ICC, as well as the conditional ICC.
The latter also takes the fixed effects variances into account (see
Nakagawa et al. 2017). If random effects are not nested and not
cross-classified, the adjusted (`adjusted = TRUE`) and unadjusted
(`adjusted = FALSE`) ICC are identical. `adjusted = TRUE` also returns
a meaningful ICC for models with random slopes. Furthermore, the adjusted
ICC is recommended for models with distributions other than Gaussian.

**ICC for specific group-levels**

The proportion of variance for specific levels related to each
other (e.g., similarity of level-1 units within
level-2 units, or of level-2 units within level-3 units) must be computed
manually. Use `get_re_var` to get the between-group variances
and the residual variance of the model, and calculate the ICC for the various
level correlations.

For example, for the ICC between level 1 and 2:
`sum(get_re_var(fit)) / (sum(get_re_var(fit)) + get_re_var(fit, "sigma_2"))`

or for the ICC between level 2 and 3:
`get_re_var(fit)[2] / sum(get_re_var(fit))`
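
Put together, a sketch for a three-level model (the nesting structure and
variable names `outcome`, `class`, `school`, and `dat` are hypothetical
assumptions for illustration):

```r
# Manual level-specific ICCs using get_re_var().
library(lme4)
library(sjstats)
fit <- lmer(outcome ~ 1 + (1 | school/class), data = dat)

re_var <- get_re_var(fit)             # between-group variances per level
sigma_2 <- get_re_var(fit, "sigma_2") # residual variance

sum(re_var) / (sum(re_var) + sigma_2) # ICC between level 1 and 2
re_var[2] / sum(re_var)               # ICC between level 2 and 3
```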

**ICC for Bayesian models**

If `ppd = TRUE`, `icc()` calculates a variance decomposition based on
the posterior predictive distribution. In this case, first, draws from
the posterior predictive distribution *not conditioned* on group-level
terms (`posterior_predict(..., re.form = NA)`) are taken, as well
as draws from this distribution *conditioned* on *all random effects*
(by default, unless specified otherwise in `re.form`). Second,
the variances of each of these draws are calculated. The "ICC" is then the
ratio of these two variances. This is the recommended way to
analyse random-effect variances for non-Gaussian models. It also makes it
possible to compare variances across models, by specifying different
group-level terms via the `re.form` argument.
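
A sketch of this usage with rstanarm (the model formula and `sleepstudy`
data are illustrative assumptions):

```r
# Posterior-predictive variance decomposition for a Bayesian model.
library(rstanarm)
library(sjstats)
fit <- stan_lmer(Reaction ~ Days + (1 + Days | Subject),
                 data = lme4::sleepstudy)

icc(fit, ppd = TRUE)                    # conditions on all random effects
icc(fit, ppd = TRUE,
    re.form = ~ (1 | Subject))          # condition on a specific term only
```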

Sometimes, when the variance of the posterior predictive distribution is
very large, the variance ratio in the output makes no sense, e.g. because
it is negative. In such cases, it might help to use a more robust measure
of the central tendency of the variances, for example
`typical = "median"`.