
semTools (version 0.5-8)

compRelSEM: Composite Reliability using SEM

Description

Calculate composite reliability from estimated factor-model parameters

Usage

compRelSEM(object, W = NULL, return.total = FALSE, obs.var = TRUE,
  tau.eq = FALSE, ord.scale = TRUE, shared = character(0),
  config = character(0), add.IRR = FALSE, higher = character(0),
  true = list(), dropSingle = TRUE, omit.factors = character(0),
  omit.indicators = character(0), omit.imps = c("no.conv", "no.se"),
  simplify = FALSE, return.df = simplify)

Value

By default (simplify=FALSE) a list of numeric vectors (1 per composite) is returned. In a multigroup CFA, the vector contains a reliability index for each group in which the composite can be computed. Each composite's vector has an attr(..., "header") with information to facilitate interpretation of that index:

  • A list of variables in the composite, which determines the composite's total variance (denominator of reliability)

  • Whether that total variance (denominator) is determined from the restricted model (i.e., CFA parameters) or unrestricted model (i.e., a freely estimated covariance matrix)

  • Whether the variables in the composite are (a transformation of) observed variables, or whether they are latent (components of) variables. The latter (e.g., latent responses assumed to underlie observed ordinal indicators, or latent level-specific components of variables in a multilevel CFA) cannot be used to calculate an observed composite variable, so the resulting coefficient should be cautiously interpreted as a "hypothetical reliability" (Chalmers, 2018; Lai, 2021).

  • The latent variables that contribute common-factor variance to the composite, which determine the composite's "true-score" variance (numerator of reliability)

  • Which reliability formula was used: model-based reliability (so-called "omega") or coefficient alpha (a model-free lower-bound estimate of true reliability, equivalent to a model-based reliability that assumes tau-equivalence)

This header will be printed immediately above each composite's reliability coefficient. When multiple reliability coefficients are returned, and each vector in the list has the same length, then setting simplify=TRUE will collect the list of single coefficients into a vector, or the list of multiple coefficients into a data.frame, and their headers will be concatenated to be printed above the coefficients. Setting simplify = -1L (or any negative number) will omit the informative headers.
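For example, a minimal sketch (assuming a fitted multidimensional CFA named fit with a factor named visual, as in the Examples below) of inspecting the returned object:

rel <- compRelSEM(fit)        # default: a list with one numeric vector per composite
rel$visual                    # coefficient(s) for the composite of "visual" indicators
attr(rel$visual, "header")    # the interpretive header described above
compRelSEM(fit, simplify = TRUE)   # collapse to a vector (or data.frame with groups/levels)
compRelSEM(fit, simplify = -1L)    # as above, but omit the informative headers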

Arguments

object

A lavaan::lavaan or lavaan.mi::lavaan.mi object, expected to contain only exogenous common factors (i.e., a CFA model).

W

Composite weights applied to observed variables prior to summing. By default (NULL), unit weights are applied to all indicators per factor (as well as to all modeled indicators when return.total=TRUE), which is equivalent to assigning any equal weight to each indicator. Weights can be a character string specifying any number of composites using lavaan::model.syntax, in the form COMPOSITE <~ weight*indicator (any indicator without a numeric weight is given a unit weight = 1). See Details and Examples about complicated CFAs (e.g., multilevel, higher-order, or bifactor).
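For illustration, a hedged sketch of such weight syntax (using the fitted object and indicator names from the Examples below; the numeric weights are arbitrary):

## x1 and x3 receive unit weights because no numeric weight is specified
wsyn <- ' visual  <~ x1 + 2*x2 + x3
          textual <~ 0.5*x4 + x5 + x6 '
compRelSEM(fit, W = wsyn)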

return.total

For multidimensional CFAs, this logical value indicates whether to return a final index for the reliability of a composite of all modeled indicators (labeled .TOTAL.). This is redundant whenever there is already a common factor indicated by all items (e.g., the general factor in a bifactor model). This argument is ignored when using the W= argument to specify composites (optionally with weights). Setting a negative value (e.g., -1) returns only the .TOTAL. composite reliability (i.e., excluding coefficients per factor).

obs.var

logical indicating whether to use observed (co)variances to compute the denominator. Setting FALSE uses model-implied (co)variances instead.

tau.eq

logical indicating whether to assume (essential) tau-equivalence by calculating coefficient \(\alpha\) (on observed or model-implied (co)variances, depending on obs.var=). Triggers error if requested in combination with unequal weights in W=. Setting FALSE (default) yields an "\(\omega\)"-type coefficient. Optionally, a character vector of composite names can specify calculating coefficient \(\alpha\) for a subset of all composites.
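For instance, a sketch (using the fitted object fit from the Examples below) of requesting \(\alpha\) for only one of the default composites:

compRelSEM(fit, tau.eq = "textual")  # alpha for "textual", omega-type for the others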

ord.scale

logical relevant only for composites of discrete items. Setting TRUE (default) applies Green and Yang's (2009, formula 21) method to calculate reliability of the actual composite (i.e., on the actual ordinal response scale). Setting FALSE yields coefficients that are only interpretable on the continuous latent-response scale, which can be interpreted as the upper bound of reliability if items were more approximately continuous. Ignored for factors with continuous indicators. Reliability cannot currently be calculated for composites of both discrete and continuous indicators.

shared

character vector of composite names, to be interpreted as representing (perhaps multidimensional) shared construct(s). Lai's (2021) coefficient \(\omega^\textrm{B}\) or \(\alpha^\textrm{B}\) is calculated to quantify reliability relative to error associated with both indicators (measurement error) and subjects (sampling error), like a generalizability coefficient. For purely scale reliability (relative to item/measurement error alone, i.e., Lai's \(\omega^\textrm{2L}\)), omit the composite(s) from the shared= argument.

config

Deprecated character vector.

add.IRR

logical indicating whether to calculate an additional reliability coefficient for any composite listed in shared=. Given that subjects can be considered as raters of their cluster's shared construct, an interrater reliability (IRR) coefficient can quantify reliability relative to rater/sampling error alone.

higher

Deprecated, supplanted by using the true= argument.

true

Optional list of character vectors, with list-element names corresponding to composite names. Each composite can have a character vector naming any common factor(s) that should be considered the source(s) of "true-score variance" in that composite. For any composite without a specification in true=, the default is to treat all common factors that load on the composite's items as contributing true-score variance. Specifying a composite in true= is only necessary to deviate from this default, for example, to specify the "general" factor in a bifactor model, in order to obtain "hierarchical omega" (\(\omega_\textrm{H}\)). A shortcut for this is available when W=NULL, by specifying a single character string (one of "omegaH", "omega.h", or "omega_h") instead of a list.
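For example, a sketch of the shortcut (using the bifactor model fit.bi from the Examples below), which should correspond to the explicit list shown there:

compRelSEM(fit.bi, return.total = -1, true = "omegaH")  # shortcut available only when W = NULL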

dropSingle

When W=NULL, this logical indicates whether to exclude single-indicator factors from the list of default composites. Even when TRUE (default), single indicators are still included in the .TOTAL. composite when return.total = TRUE.

omit.factors

Deprecated, supplanted by using the true= argument.

omit.indicators

Deprecated, supplanted by using the W= argument.

omit.imps

character vector specifying criteria for omitting imputations from pooled results (when using lavaan.mi::lavaan.mi). Can include any of c("no.conv", "no.se", "no.npd"); the first two are the default, which excludes any imputations that did not converge or for which standard errors could not be computed. The last option ("no.npd") would exclude any imputations which yielded a nonpositive definite covariance matrix for observed or latent variables, which would include any "improper solutions" such as Heywood cases. NPD solutions are not excluded by default because they are likely to occur due to sampling error, especially in small samples. However, gross model misspecification could also cause NPD solutions. Users can compare pooled results with and without this setting as a sensitivity analysis to see whether some imputations warrant further investigation.

simplify

logical indicating whether to return reliability coefficients in a numeric vector (for single-group model) or data.frame (one row per group, or per level in some cases). Specifying a negative number (simplify = -1L) additionally removes the informative headers printed to facilitate interpretation.

return.df

Deprecated logical argument, replaced by simplify=.

Author

Terrence D. Jorgensen (University of Amsterdam; TJorgensen314@gmail.com)

Uses hidden functions to implement Green & Yang's (2009) reliability for categorical indicators, written by Sunthud Pornprasertmanit (psunthud@gmail.com) for the deprecated reliability() function.

Details

Several coefficients for factor-analysis reliability have been termed "omega", which Cho (2021) argues is a misleading misnomer, recommending instead that \(\rho\) represent them all, differentiated by descriptive subscripts. In this package, we strive to provide unlabeled coefficients, leaving it to users to decide on a label in their report. But we do use the symbols \(\alpha\) and \(\omega\) in the formulas below in order to distinguish coefficients that do (not) assume essential tau-equivalence.

Bentler (1968) first introduced factor-analysis reliability for a unidimensional factor model with congeneric indicators, labeling the coefficient \(\alpha\). McDonald (1999) later referred to this and other reliability coefficients, first as \(\theta\) (in 1970), then as \(\omega\), which is a source of confusion when reporting coefficients (Cho, 2021). Coefficients based on factor models were later generalized to account for multidimensionality (possibly with cross-loadings) and correlated errors. The general \(\omega\) formula implemented in this function is:

$$\omega=\frac{\bold{w}^{\prime} \Lambda \Phi \Lambda^{\prime} \bold{w} }{ \bold{w}^{\prime} \hat{\Sigma} \bold{w} }, $$

where \(\hat{\Sigma}\) can be the model-implied covariance matrix from either the saturated model (i.e., the "observed" covariance matrix, used by default) or from the hypothesized CFA model, controlled by the obs.var= argument. All elements of the matrices in the numerator and denominator are effectively summed by pre- and post-multiplying by \(\bold{w}\), a \(k\)-dimensional vector of composite weights (typically consisting of \(\bold{1}\)s, unless otherwise specified with the W= argument), where \(k\) is the number of variables in the composite. Reliability of subscale composites (or simply of separate factors in a joint CFA) can be calculated by setting omitted-indicator weights to 0. For unidimensional constructs with simple structure, the equation above is often simplified to a scalar representation (e.g., McDonald, 1999, Eq. 6.20b):

$$ \omega = \frac{ \left( \sum^{k}_{i = 1} \lambda_i \right)^{2} Var\left( \psi \right) }{ \left( \sum^{k}_{i = 1} \lambda_i \right)^{2} Var\left( \psi \right) + \sum^{k}_{i = 1} \theta_{ii} }, $$

Note that all coefficients are calculated from total factor variances: lavInspect(object, "cov.lv"), which assumes the fitted object= is a CFA, not a full SEM with latent regression slopes. If there is a Beta matrix, it should only contain higher-order factor loadings (see details below).
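As a rough sketch of this general formula for a composite of all modeled indicators (assuming a hypothetical single-group CFA, here called fit.c, with continuous indicators and unit weights; compRelSEM() itself also handles groups, levels, and ordinal indicators), the coefficient can be reproduced from lavaan's extractor functions:

EST    <- lavInspect(fit.c, "est")             # estimated parameter matrices
Lambda <- EST$lambda                           # factor loadings
Phi    <- lavInspect(fit.c, "cov.lv")          # total factor (co)variances
Sigma  <- lavInspect(fit.c, "sampstat")$cov    # "observed" covariance matrix (obs.var = TRUE)
## Sigma <- lavInspect(fit.c, "cov.ov")        # model-implied alternative (obs.var = FALSE)
w <- rep(1, nrow(Lambda))                      # unit weights; set elements to 0 to omit indicators
omega <- c(t(w) %*% Lambda %*% Phi %*% t(Lambda) %*% w) / c(t(w) %*% Sigma %*% w)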

When the fitted CFA imposes constraints consistent with (essential) tau-equivalence, \(\omega\) is equivalent to coefficient \(\alpha\) (Cronbach, 1951):

$$ \alpha = \frac{k}{k - 1}\left[ 1 - \frac{ \textrm{tr} \left( \hat{\Sigma} \right) }{ \bold{1}^{\prime} \hat{\Sigma} \bold{1} } \right],$$

where \(\textrm{tr} \left( . \right)\) is the trace operation (i.e., the sum of diagonal elements). Setting tau.eq=TRUE triggers the application of this formula (rather than \(\omega\) above) to the model-implied or observed covariance matrix (again controlled by the obs.var= argument).
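A corresponding sketch for coefficient \(\alpha\) of a composite of all modeled indicators (same hypothetical fit.c and assumptions as above):

S <- lavInspect(fit.c, "sampstat")$cov   # or lavInspect(fit.c, "cov.ov") for model-implied
k <- ncol(S)
alpha <- (k / (k - 1)) * (1 - sum(diag(S)) / sum(S))   # sum(S) is equivalent to 1' S 1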

Higher-Order Factors:

For higher-order constructs with latent indicators, only \(\omega\) is available because \(\alpha\) was not derived from CFA parameters (although it can be expressed in a particular restricted CFA specification).

The reliability of a composite that represents a higher-order construct requires partitioning the model-implied factor covariance matrix \(\Phi\) in order to isolate the common-factor variance associated only with the higher-order factor. Using a second-order factor model, the model-implied covariance matrix of observed indicators \(\hat{\Sigma}\) can be partitioned into 3 sources:

  1. the second-order common-factor (co)variance: \(\Lambda \bold{B} \Phi_2 \bold{B}^{\prime} \Lambda^{\prime}\)

  2. the residual variance of the first-order common factors (i.e., not accounted for by the second-order factor): \(\Lambda \Psi_{u} \Lambda^{\prime}\)

  3. the measurement error of observed indicators: \(\Theta\)

where \(\Lambda\) contains first-order factor loadings, \(\bold{B}\) contains second-order factor loadings, \(\Phi_2\) is the model-implied covariance matrix of the second-order factor(s), and \(\Psi_{u}\) is the covariance matrix of first-order factor disturbances. In practice, we can use the full \(\bold{B}\) matrix and full model-implied \(\Phi\) matrix (i.e., including all latent factors) because the zeros in \(\bold{B}\) will cancel out unwanted components of \(\Phi\). Thus, we can calculate the proportion of variance of a composite score that is attributable to the second-order factor:

$$\omega=\frac{\bold{w}^{\prime} \Lambda \bold{B} \Phi \bold{B}^{\prime} \Lambda^{\prime} \bold{w} }{ \bold{w}^{\prime} \hat{\Sigma} \bold{w}}, $$

where \(\bold{w}\), \(\hat{\Sigma}\), and \(k\) are defined as above. Note that if a higher-order factor also has observed indicators, it is necessary to model the observed indicators as single-indicator lower-order constructs, so that all of the higher-order factor indicators are latent (with loadings in the Beta matrix, not Lambda); otherwise, higher-order factor variance in the observed indicator is not captured in the numerator.
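A sketch of that partitioning (using the second-order model fit.hi from the Examples below, with unit weights and the observed covariance matrix in the denominator):

EST    <- lavInspect(fit.hi, "est")
Lambda <- EST$lambda                          # first-order loadings on observed indicators
B      <- EST$beta                            # higher-order loadings (latent regressions)
Phi    <- lavInspect(fit.hi, "cov.lv")        # full model-implied latent covariance matrix
Sigma  <- lavInspect(fit.hi, "sampstat")$cov  # observed covariance matrix
w <- rep(1, nrow(Lambda))
omega.2nd <- c(t(w) %*% Lambda %*% B %*% Phi %*% t(B) %*% t(Lambda) %*% w) /
             c(t(w) %*% Sigma %*% w)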

Bifactor or Multitrait--Multimethod (MTMM) Models:

These multidimensional models partition sources of common variance that are due to the factor of interest (e.g., a trait) as well as non-target factors (e.g., "method factors", such as item wording or type of respondent). The latter can be considered as systematic (i.e., non-random) sources of error, to be excluded from the numerator of a reliability coefficient, yielding so-called "hierarchical omega" (\(\omega_\textrm{H}\)). On the other hand, non-target variance that can be expected in repeated measurement meets the classical test theory definition of reliability. Including method factors in the numerator yields so-called "omega total" (\(\omega_\textrm{T}\)), which is the default approach in compRelSEM() because it is consistent with the classical test theory definition of reliability. However, users can obtain \(\omega_\textrm{H}\) for a composite by using the true= argument to specify any factor(s) to be treated as representing true scores. The same approach can be taken to obtain the proportion of a (sub)scale composite's variance due to method factors (by listing those in true=), if that is of interest.

Categorical Indicators:

When all indicators (per composite) are ordinal, a CFA can be fitted that includes a threshold model (sometimes called Item Factor Analysis: IFA), which assumes a normally distributed latent response underlies each observed ordinal response. Despite making this assumption, a composite of ordinal items can only be calculated by assigning numerical values to the ordinal categories, so that the pseudo-numerical variables can be summed into a composite variable that is more approximately continuous than its items.

Applying the formulas above to IFA parameters provides the hypothetical reliability of a composite of latent responses: a composite which cannot be calculated in practice. Nonetheless, this hypothetical reliability can be interpreted as an estimate of what reliability could be if a more approximately continuous response scale were used (e.g., with sufficiently many response categories that the standardized solutions are equivalent between a fitted IFA and a fitted CFA that treats the ordinal responses as numeric; Chalmers, 2018). This can be requested by setting ord.scale=FALSE, in which case \(\hat\Sigma\) in the formulas above is a polychoric correlation matrix. When ord.scale=FALSE and tau.eq=TRUE, this results in what Zumbo et al. (2007) termed "ordinal \(\alpha\)" (see criticisms by Chalmers, 2018, and a rejoinder by Zumbo & Kroc, 2019).

Alternatively, Green and Yang (2009, Eq. 21) derived a method to calculate model-based reliability (\(\omega\)) from IFA parameters (i.e., incorporating the latent-response assumption) but that applies to the actual (i.e., ordinal) observed response scale (the default: ord.scale=TRUE). Lu et al. (2020) showed how to incorporate unequal weights into Green and Yang's (2009) formula, so W= can be used to estimate the (maximal) reliability of a weighted composite of ordinal variables. However, combining ord.scale=TRUE with tau.eq=TRUE is not available. For \(\alpha\) to be interpretable on the observed ordinal scale, users must choose whether to (a) release the latent-response assumption, by fitting a CFA without a threshold model, or (b) fit an IFA model with constraints consistent with the assumption of (essential) tau-equivalence (i.e., equal factor loadings).
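For illustration, a sketch comparing these options (using the fitted object fit from the Examples below, whose speed factor has binary indicators):

compRelSEM(fit)                                    # Green & Yang (2009): actual ordinal composite
compRelSEM(fit, ord.scale = FALSE)                 # hypothetical latent-response composite
compRelSEM(fit, ord.scale = FALSE, tau.eq = TRUE)  # Zumbo et al.'s (2007) "ordinal alpha"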

No method analogous to Green and Yang (2009, Eq. 21) has yet been proposed to calculate reliability with a mixture of categorical and continuous indicators, so any such composite is skipped with a warning.

Multilevel Measurement Models:

How to define reliability coefficients for scales employed in nested designs is an ongoing topic of methodological development, with some controversy about best practice when the target of measurement is the "cluster" or between-level construct (i.e., Level 2 in a 2-level design). Geldhof et al. (2014) proposed applying the standard formulas above to each level's CFA parameters and/or (model-implied) covariance matrix, whereas Lai (2021) proposed different formulas that account for all sources of variance in composites of observed variables.

There is no controversy about how to define a within-level reliability coefficient, which can be interpreted as the reliability of a composite calculated by first centering each indicator around its cluster mean, then calculating the composite from the cluster-mean-centered items. Equivalently (i.e., the same formula), this can be interpreted as the hypothetical reliability of a composite of the items' latent Level-1 components. This coefficient can be requested with lavaan::model.syntax (to pass to the W= argument) that specifies a composite in a Level-1 "block", which must not have the same name as any composite in the Level-2 block. If users do not use W= (i.e., a reliability index is calculated per modeled common factor), then this can be accomplished by using unique factor names across levels.

This contrasts with reliability indices for between-level composites: The reliability of a hypothetical composite of items' latent between-level components (using formulas proposed by Geldhof et al., 2014) is not equivalent to the coefficient for a composite of items' observed cluster means, using generalizations of formulas proposed by Lai (2021):

$$ \omega^\textrm{B} = \frac{\bold{w}^{\prime} \Lambda^\textrm{B} \Phi^\textrm{B} \Lambda^{\textrm{B}\prime} \bold{w} }{ \bold{w}^{\prime} \hat{\Sigma}^\textrm{B} \bold{w} + \frac{1}{\tilde{n}_\textrm{clus}} \left( \bold{w}^{\prime} \hat{\Sigma}^\textrm{W} \bold{w} \right) }, $$

$$ \alpha^\textrm{B} = \frac{2k}{k - 1}\left[ \frac{ \sum^{k}_{i=2} \sum^{i-1}_{j=1} \hat\sigma^\textrm{B}_{ij} }{ \bold{1}^{\prime} \hat\Sigma^\textrm{B} \bold{1} + \frac{1}{\tilde{n}_\textrm{clus}} \left( \bold{1}^{\prime} \hat\Sigma^\textrm{W} \bold{1} \right) } \right],$$

where \(\tilde{n}_\textrm{clus}\) is the harmonic-mean cluster size, and superscripts B and W indicate between- and within-level parameters. Obtaining these estimates of composite reliability requires fitting a 2-level CFA that provides the same factor structure and factor names in the models at both levels (following the advice of Jak et al., 2021), as well as the same composite name in both levels/blocks of syntax passed to W= (if used). Furthermore, the between-level composite name must be passed to the shared= argument; otherwise, the same factor/composite name across levels will yield Lai's (2021) coefficient for a configural construct (see Examples):

$$ \omega^\textrm{2L} = \frac{\bold{w}^{\prime} \left( \Lambda^\textrm{W} \Phi^\textrm{W} \Lambda^{\textrm{W}\prime} + \Lambda^\textrm{B} \Phi^\textrm{B} \Lambda^{\textrm{B}\prime} \right) \bold{w} }{ \bold{w}^{\prime} \hat\Sigma^\textrm{B} \bold{w} + \bold{w}^{\prime} \hat\Sigma^\textrm{W} \bold{w} }, $$

$$ \alpha^\textrm{2L} = \frac{2k}{k - 1}\left[ \frac{ \sum^{k}_{i=2} \sum^{i-1}_{j=1} \left( \hat\sigma^\textrm{W}_{ij} + \hat\sigma^\textrm{B}_{ij} \right) }{ \bold{1}^{\prime} \hat\Sigma^\textrm{B} \bold{1} + \bold{1}^{\prime} \hat\Sigma^\textrm{W} \bold{1} } \right],$$

This can be interpreted as the scale-reliability coefficient ignoring the nested design, as both the common-factor variance of the Level-1 factor and of its Level-2 cluster means are treated as true-score variance.

Note that Lai's (2021) between-level reliability coefficients for a shared construct quantify generalizability across both indicators and raters (i.e., subjects rating their cluster's construct). Lüdtke et al. (2011) refer to these as measurement error and sampling error, respectively. From this perspective (and following from generalizability theory), an IRR coefficient can also be calculated:

$$ \textrm{IRR} = \frac{\bold{w}^{\prime} \left( \hat{\Sigma}^\textrm{B} \right) \bold{w} }{ \bold{w}^{\prime} \hat\Sigma^\textrm{B} \bold{w} + \bold{w}^{\prime} \hat\Sigma^\textrm{W} \bold{w} }, $$

which quantifies generalizability across rater/sampling-error only, and can be returned for any shared= construct's composite by setting add.IRR=TRUE.

References

Bentler, P. M. (1968). Alpha-maximized factor analysis (alphamax): Its relation to alpha and canonical factor analysis. Psychometrika, 33(3), 335--345. https://doi.org/10.1007/BF02289328

Chalmers, R. P. (2018). On misconceptions and the limited usefulness of ordinal alpha. Educational and Psychological Measurement, 78(6), 1056--1071. https://doi.org/10.1177/0013164417727036

Cho, E. (2021). Neither Cronbach’s alpha nor McDonald’s omega: A commentary on Sijtsma and Pfadt. Psychometrika, 86(4), 877--886. https://doi.org/10.1007/s11336-021-09801-1

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297--334. https://doi.org/10.1007/BF02310555

Geldhof, G. J., Preacher, K. J., & Zyphur, M. J. (2014). Reliability estimation in a multilevel confirmatory factor analysis framework. Psychological Methods, 19(1), 72--91. https://doi.org/10.1037/a0032138

Green, S. B., & Yang, Y. (2009). Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika, 74(1), 155--167. https://doi.org/10.1007/s11336-008-9099-3

Jak, S., Jorgensen, T. D., & Rosseel, Y. (2021). Evaluating cluster-level factor models with lavaan and Mplus. Psych, 3(2), 134--152. https://doi.org/10.3390/psych3020012

Lai, M. H. C. (2021). Composite reliability of multilevel data: It’s about observed scores and construct meanings. Psychological Methods, 26(1), 90--102. https://doi.org/10.1037/met0000287

Lu, Z., Hong, M., & Kim, S. (2020). Formulas of multilevel reliabilities for tests with ordered categorical responses. In M. Wiberg, D. Molenaar, J. González, U. Böckenholt, & J.-S. Kim (Eds.), Quantitative psychology: The 85th annual meeting of the Psychometric Society, Virtual (pp. 103--112). Springer. https://doi.org/10.1007/978-3-030-74772-5_10

Lüdtke, O., Marsh, H. W., Robitzsch, A., & Trautwein, U. (2011). A 2 \(\times\) 2 taxonomy of multilevel latent contextual models: Accuracy--bias trade-offs in full and partial error correction models. Psychological Methods, 16(4), 444--467. https://doi.org/10.1037/a0024376

McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Erlbaum.

Zumbo, B. D., Gadermann, A. M., & Zeisser, C. (2007). Ordinal versions of coefficients alpha and theta for Likert rating scales. Journal of Modern Applied Statistical Methods, 6(1), 21--29. https://doi.org/10.22237/jmasm/1177992180

Zumbo, B. D., & Kroc, E. (2019). A measurement is a choice and Stevens’ scales of measurement do not help make it: A response to Chalmers. Educational and Psychological Measurement, 79(6), 1184--1197. https://doi.org/10.1177/0013164419844305

See Also

maximalRelia() for the maximal reliability of a weighted composite

Examples

library(lavaan)
library(semTools)

data(HolzingerSwineford1939)
HS9 <- HolzingerSwineford1939[ , c("x7","x8","x9")]
HSbinary <- as.data.frame( lapply(HS9, cut, 2, labels=FALSE) )
names(HSbinary) <- c("y7","y8","y9")
HS <- cbind(HolzingerSwineford1939, HSbinary)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ y7 + y8 + y9 '

fit  <- cfa(HS.model, data = HS, ordered = c("y7","y8","y9"), std.lv = TRUE)
fitg <- cfa(HS.model, data = HS, ordered = c("y7","y8","y9"), std.lv = TRUE,
            group = "school")

## works for factors with exclusively continuous OR categorical indicators
compRelSEM(fit)
compRelSEM(fitg)

## reliability for composite of ALL indicators only available when they are
## all continuous or all categorical.  The example below calculates a
## composite of continuous items from 2 factors (visual and textual)
## using the custom-weights syntax (note the "<~" operator)
w.tot <- '
  visual  <~ x1 + x2 + x3
  textual <~                x4 + x5 + x6
  total   <~ x1 + x2 + x3 + x4 + x5 + x6
'
compRelSEM(fit, W = w.tot)


## ----------------------
## Higher-order construct
## ----------------------

## Reliability of a composite that represents a higher-order factor
mod.hi <- ' visual  =~ x1 + x2 + x3
            textual =~ x4 + x5 + x6
            speed   =~ x7 + x8 + x9
            general =~ visual + textual + speed '

fit.hi <- cfa(mod.hi, data = HolzingerSwineford1939)
## "general" is the factor representing "true scores", but it has no
## observed indicators.  Must use custom-weights syntax:
compRelSEM(fit.hi, W = 'g <~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9')


## ----------------------
## Hierarchical omega
## and omega Total
## ----------------------

mod.bi <- ' visual  =~ x1 + x2 + x3
            textual =~ x4 + x5 + x6
            speed   =~ x7 + x8 + x9
            general =~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 '
fit.bi <- cfa(mod.bi, data = HolzingerSwineford1939,
              orthogonal = TRUE, std.lv = TRUE)
compRelSEM(fit.bi, return.total = -1) # omega_Total
compRelSEM(fit.bi, return.total = -1, # omega_Hierarchical
           true = list(.TOTAL. = "general"))


## ----------------------
## Multilevel Constructs
## ----------------------

## Same factor structure with metric invariance across levels (Jak et al., 2021)
model2 <- '
  level: 1
    f1 =~ y1 + L2*y2 + L3*y3
    f2 =~ y4 + L5*y5 + L6*y6
  level: 2
    f1 =~ y1 + L2*y2 + L3*y3
    f2 =~ y4 + L5*y5 + L6*y6
'
fit2 <- sem(model2, data = Demo.twolevel, cluster = "cluster")

## Lai's (2021, Eq. 13) omega index for a configural (Level-1) construct,
## treating common-factor variance at both levels as "true" variance
compRelSEM(fit2)

## Lai's (2021, Eq. 17) omega index for a shared (Level-2) construct
## (also its interrater reliability coefficient)
compRelSEM(fit2, shared = c("f1","f2"), add.IRR = TRUE)

## Geldhof et al.'s (2014) level-specific indices imply a different
## composite (hypothetically) calculated per level.  Thus, use
## unique composite names per level.

W2.Geldhof <- ' level: 1
  F1w <~ y1 + y2 + y3
  F2w <~ y4 + y5 + y6
level: 2
  F1b <~ y1 + y2 + y3
  F2b <~ y4 + y5 + y6
'
compRelSEM(fit2, W = W2.Geldhof)

