The following tests are implemented:
- Approach suggested by Klesel et al. (2019)
The model-implied variance-covariance matrix (either the indicator
(.type_vcv = "indicator") or the construct (.type_vcv = "construct")
variance-covariance matrix) is compared across groups.
To measure the distance between the model-implied variance-covariance
matrices, the geodesic distance (dG) and the squared Euclidean distance
(dL) are used. If more than two groups are compared, the average distance
over all groups is used.
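The two distance measures can be sketched as follows. This is a minimal Python illustration, assuming the common definitions of the geodesic and the squared Euclidean (Frobenius) distance between covariance matrices; cSEM's exact scaling may differ:

```python
import numpy as np

def geodesic_distance(S1, S2):
    """Geodesic distance between two covariance matrices, using the
    common definition sqrt(sum(log(eigvals(S1^-1 S2))^2)).
    NOTE: an assumed formulation, not cSEM's exact implementation."""
    lam = np.linalg.eigvals(np.linalg.solve(S1, S2))
    return float(np.sqrt(np.sum(np.log(lam.real) ** 2)))

def squared_euclidean_distance(S1, S2):
    """Squared Euclidean distance: sum of squared element-wise differences."""
    return float(np.sum((S1 - S2) ** 2))

# Two toy 2x2 model-implied covariance matrices, one per group
S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.0, 0.5], [0.5, 1.0]])
print(geodesic_distance(S1, S2))
print(squared_euclidean_distance(S1, S2))
```

With more than two groups, the same pairwise distances would be averaged over all group pairs, as described above.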
- Approach suggested by Sarstedt et al. (2011)
Groups are compared in terms of parameter differences across groups.
Sarstedt et al. (2011) test whether parameter k is equal across all
groups. If several parameters are tested simultaneously, it is
recommended to adjust the significance level or the p-values (in cSEM,
the correction is applied to the p-values). By default, no multiple
testing correction is done; however, several common adjustments are
available via .approach_p_adjust. See stats::p.adjust() for details.
Note: the test has some severe shortcomings. Use with caution.
- Approach suggested by Chin and Dibbern (2010)
Groups are compared in terms of parameter differences across groups.
Chin and Dibbern (2010) test whether parameter k is equal between two
groups. If more than two groups are tested for equality, parameter k is
compared between all pairs of groups. In this case, it is recommended to
adjust the significance level or the p-values (in cSEM, the correction is
applied to the p-values), since this is essentially a multiple testing
setup. If several parameters are tested simultaneously, the correction
accounts for both the number of group pairs and the number of parameters.
By default, no multiple testing correction is done; however, several
common adjustments are available via .approach_p_adjust. See
stats::p.adjust() for details.
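For illustration, the two most familiar adjustments offered by stats::p.adjust() can be mirrored as follows (a Python sketch of the Bonferroni and Holm corrections; the function name is illustrative, not part of cSEM):

```python
def p_adjust(pvalues, method="holm"):
    """Sketch of R's stats::p.adjust() for the 'bonferroni' and 'holm'
    methods. In a pairwise setup with g groups and k parameters there are
    k * g * (g - 1) / 2 p-values to feed in."""
    m = len(pvalues)
    if method == "bonferroni":
        # Multiply every p-value by the number of tests, capped at 1
        return [min(1.0, p * m) for p in pvalues]
    if method == "holm":
        # Step-down: sort ascending, scale by (m - rank), enforce monotonicity
        order = sorted(range(m), key=lambda i: pvalues[i])
        adjusted = [0.0] * m
        running_max = 0.0
        for rank, i in enumerate(order):
            running_max = max(running_max, min(1.0, (m - rank) * pvalues[i]))
            adjusted[i] = running_max
        return adjusted
    raise ValueError("unknown method: " + method)

print(p_adjust([0.01, 0.02, 0.03], "holm"))
print(p_adjust([0.01, 0.02, 0.03], "bonferroni"))
```

Both methods reproduce the corresponding stats::p.adjust() output for these inputs; R offers several further methods (e.g. "hochberg", "BH") not sketched here.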
- Approach suggested by Keil et al. (2000)
Groups are compared in terms of parameter differences across groups.
Keil et al. (2000) test whether parameter k is equal between two groups.
It is assumed that the standard errors of the coefficients are equal
across groups. The calculation of the standard error of the parameter
difference is adjusted as proposed by Henseler et al. (2009).
If more than two groups are tested for equality, parameter k is compared
between all pairs of groups. In this case, it is recommended to adjust
the significance level or the p-values (in cSEM, the correction is
applied to the p-values), since this is essentially a multiple testing
setup. If several parameters are tested simultaneously, the correction
accounts for both the number of group pairs and the number of parameters.
By default, no multiple testing correction is done; however, several
common adjustments are available via .approach_p_adjust. See
stats::p.adjust() for details.
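The statistic behind this approach is commonly written as a pooled-standard-error t-test with n1 + n2 - 2 degrees of freedom. A Python sketch, following the formula usually attributed to Keil et al. (2000); cSEM's implementation details may differ:

```python
import math

def keil_t_test(b1, se1, n1, b2, se2, n2):
    """t-statistic for the difference in a parameter estimated in two
    groups, assuming equal standard errors across groups.
    b1, b2: parameter estimates; se1, se2: their (bootstrap) standard
    errors; n1, n2: group sample sizes. Returns (t, df)."""
    # Pooled standard error, weighting each group's SE by its size
    pooled_se = math.sqrt(
        ((n1 - 1) ** 2 / (n1 + n2 - 2)) * se1 ** 2
        + ((n2 - 1) ** 2 / (n1 + n2 - 2)) * se2 ** 2)
    t = (b1 - b2) / (pooled_se * math.sqrt(1.0 / n1 + 1.0 / n2))
    return t, n1 + n2 - 2

# Hypothetical estimates: same path coefficient in two groups of 100
t, df = keil_t_test(0.50, 0.10, 100, 0.30, 0.10, 100)
print(t, df)
```

The p-value would then be obtained from a t distribution with df degrees of freedom.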
- Approach suggested by Nitzl (2010)
Groups are compared in terms of parameter differences across groups.
Similarly to Keil et al. (2000), a single parameter k is tested for
equality between two groups. In contrast to Keil et al. (2000), the
standard errors of the coefficients are assumed to be unequal across
groups (Sarstedt et al., 2011).
If more than two groups are tested for equality, parameter k is compared
between all pairs of groups. In this case, it is recommended to adjust
the significance level or the p-values (in cSEM, the correction is
applied to the p-values), since this is essentially a multiple testing
setup. If several parameters are tested simultaneously, the correction
accounts for both the number of group pairs and the number of parameters.
By default, no multiple testing correction is done; however, several
common adjustments are available via .approach_p_adjust. See
stats::p.adjust() for details.
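With unequal standard errors, the statistic reduces to a Welch-type t-test. A Python sketch; the degrees-of-freedom formula below is the standard Welch-Satterthwaite approximation, and the exact formula used in cSEM for this approach may differ:

```python
import math

def unequal_variance_t_test(b1, se1, n1, b2, se2, n2):
    """t-statistic for a parameter difference between two groups when the
    standard errors are assumed unequal across groups. Returns (t, df)."""
    t = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (se1 ** 2 + se2 ** 2) ** 2 / (
        se1 ** 4 / (n1 - 1) + se2 ** 4 / (n2 - 1))
    return t, df

# Hypothetical estimates with clearly unequal standard errors
t, df = unequal_variance_t_test(0.50, 0.10, 100, 0.30, 0.20, 50)
print(t, df)
```

Note that the numerator and the SE term are the same as in the equal-variance case; only the pooling and the degrees of freedom change.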
- Approach suggested by Henseler (2007) and Henseler et al. (2009)
This approach is also known as PLS-MGA (Henseler et al., 2009;
Sarstedt et al., 2011). It tests whether a population parameter of
group 1 is larger than or equal to the population parameter of group 2.
To do so, all bias-corrected bootstrap estimates of group 1 are compared
with those of group 2. The outcome is an estimated probability, and the
decision is based on whether this probability is smaller than .alpha or
larger than 1 - .alpha.
Therefore, two null hypotheses are tested, namely H0: theta_1 <= theta_2
and H0: theta_1 >= theta_2. As a consequence, it is currently not
possible to adjust the p-values in case of multiple comparisons, i.e.,
.approach_p_adjust is ignored.
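The probability estimate at the heart of PLS-MGA can be sketched as follows. This Python illustration compares raw bootstrap estimates pairwise, whereas the original procedure additionally bias-corrects the estimates first:

```python
import numpy as np

def pls_mga_probability(boot1, boot2):
    """Estimated probability that the parameter in group 1 exceeds the
    parameter in group 2: the share of all cross-group pairs of bootstrap
    estimates in which group 1's estimate is the larger one."""
    boot1 = np.asarray(boot1, dtype=float)
    boot2 = np.asarray(boot2, dtype=float)
    # Broadcasting builds the full len(boot1) x len(boot2) comparison grid
    return float(np.mean(boot1[:, None] > boot2[None, :]))

# Tiny toy example: 2 bootstrap estimates per group
p = pls_mga_probability([0.40, 0.50], [0.10, 0.45])
print(p)  # 3 of the 4 cross-group pairs favour group 1 -> 0.75
# A one-sided difference is then judged against .alpha and 1 - .alpha
```

In practice boot1 and boot2 would each hold hundreds of bootstrap estimates of the same parameter, one vector per group.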
Use .approach_mgd to choose the approach. By default, all approaches are
computed (.approach_mgd = "all").
By default, approaches based on parameter differences across groups
compare all parameters (.parameters_to_compare = NULL). To compare only
a subset of parameters, provide the parameters in lavaan model syntax,
just like the model to estimate. Take the simple model:
model_to_estimate <- "
# Structural model
eta2 ~ eta1
eta3 ~ eta1 + eta2

# Each concept is measured by 3 indicators, i.e., modeled as a latent variable
eta1 =~ y11 + y12 + y13
eta2 =~ y21 + y22 + y23
eta3 =~ y31 + y32 + y33
"
If only the path from eta1 to eta3 and the loadings of eta1 are to be compared
across groups, write:
to_compare <- "
# Structural parameters to compare
eta3 ~ eta1

# Loadings to compare
eta1 =~ y11 + y12 + y13
"
Note that the "model" provided to .parameters_to_compare does not have
to be an estimable model!
Note also that, in contrast to all other functions in cSEM that use the
argument, .handle_inadmissibles defaults to "replace" to accommodate the
Sarstedt et al. (2011) approach.
Argument .R_permutation is ignored for the "Nitzl" and the "Keil"
approaches.
.R_bootstrap is ignored if .object already contains resamples, i.e., has
class cSEMResults_resampled, and if only the "Klesel" or the "Chin"
approach is used.
The argument .saturated is used by "Klesel" only. If .saturated = TRUE,
the original structural model is ignored and replaced by a saturated
model, i.e., a model in which all constructs are allowed to correlate
freely. This is useful to test differences in the measurement models
between groups in isolation.