cSEM (version 0.1.0)

testMGD: Tests for multi-group comparisons

Description

This function performs several tests for multi-group comparisons. For the permutation-based tests, the reference distribution of the test statistic is obtained by permutation.

Usage

testMGD(
 .object                = NULL,
 .alpha                 = 0.05,
 .approach_p_adjust     = "none",
 .approach_mgd          = c("all", "Klesel", "Chin", "Sarstedt", 
                            "Keil", "Nitzl", "Henseler"),
 .parameters_to_compare = NULL,
 .handle_inadmissibles  = c("replace", "drop", "ignore"),
 .R_permutation         = 499,
 .R_bootstrap           = 499,
 .saturated             = FALSE,
 .seed                  = NULL,
 .type_vcv              = c("indicator", "construct"),
 .verbose               = TRUE
 )

Arguments

.object

An R object of class cSEMResults resulting from a call to csem().

.alpha

A numeric value or a numeric vector of significance levels. Defaults to 0.05.

.approach_p_adjust

Character string or a vector of character strings. Approach used to adjust the p-value in multiple testing. See the methods argument of stats::p.adjust() for a list of choices and their description. Defaults to "none".

.approach_mgd

Character string or a vector of character strings. Approach used for the multi-group comparison. One of: "all", "Klesel", "Chin", "Sarstedt", "Keil", "Nitzl", or "Henseler". Defaults to "all", in which case all approaches are computed (if possible). Note that the output will be quite long in this case.

.parameters_to_compare

A model in lavaan model syntax indicating which parameters (i.e., paths (~), loadings (=~), weights (<~), or correlations (~~)) should be compared across groups. Defaults to NULL, in which case all parameters of the originally specified model are compared.

.handle_inadmissibles

Character string. How should inadmissible results be treated? One of "drop", "ignore", or "replace". If "drop", all replications/resamples yielding an inadmissible result are dropped (i.e., the number of results returned is potentially less than the number requested). If "ignore", all results are returned even if some or all of the replications yielded inadmissible results (i.e., the number of results returned equals the number requested). If "replace", resampling continues until there are exactly as many admissible solutions as requested. Defaults to "replace" to accommodate all approaches.

.R_permutation

Integer. The number of permutations. Defaults to 499.

.R_bootstrap

Integer. The number of bootstrap runs. Ignored if .object contains resamples. Defaults to 499.

.saturated

Logical. Should a saturated structural model be used? Defaults to FALSE.

.seed

Integer or NULL. The random seed to use. Defaults to NULL in which case an arbitrary seed is chosen. Note that the scope of the seed is limited to the body of the function it is used in. Hence, the global seed will not be altered!

.type_vcv

Character string. Which model-implied correlation matrix is calculated? One of "indicator" or "construct". Defaults to "indicator".

.verbose

Logical. Should information (e.g., progress bar) be printed to the console? Defaults to TRUE.

Value

A list of class cSEMTestMGD. Technically, cSEMTestMGD is a named list containing the following list elements:

$Information

Additional information.

$Klesel

A list with elements Test_statistic, P_value, and Decision.

$Chin

A list with elements Test_statistic, P_value, Decision, and Decision_overall.

$Sarstedt

A list with elements Test_statistic, P_value, Decision, and Decision_overall.

$Keil

A list with elements Test_statistic, P_value, Decision, and Decision_overall.

$Nitzl

A list with elements Test_statistic, P_value, Decision, and Decision_overall.

$Henseler

A list with elements Test_statistic, P_value, Decision, and Decision_overall.

Details

The following tests are implemented:

Approach suggested by Klesel et al. (2019)

The model-implied variance-covariance matrix (either indicator (.type_vcv = "indicator") or construct (.type_vcv = "construct")) is compared across groups.

To measure the distance between the model-implied variance-covariance matrices, the geodesic distance (dG) and the squared Euclidean distance (dL) are used. If more than two groups are compared, the average distance over all groups is used.
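For illustration, here is a minimal sketch of the two distance measures for two covariance matrices S1 and S2, following the definitions in Klesel et al. (2019). This is an illustration only, not the internal cSEM implementation:

## Squared Euclidean distance: half the sum of squared element-wise differences
dL <- function(S1, S2) 0.5 * sum((S1 - S2)^2)

## Geodesic distance: based on the log-eigenvalues of solve(S1) %*% S2
dG <- function(S1, S2) {
  lambda <- Re(eigen(solve(S1) %*% S2, only.values = TRUE)$values)
  0.5 * sum(log(lambda)^2)
}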

Approach suggested by Sarstedt et al. (2011)

Groups are compared in terms of parameter differences across groups. Sarstedt et al. (2011) test if parameter k is equal across all groups. If several parameters are tested simultaneously, it is recommended to adjust the significance level or the p-values (in cSEM, correction is done by p-value). By default, no multiple-testing correction is done; however, several common adjustments are available via .approach_p_adjust. See stats::p.adjust() for details. Note: the test has some severe shortcomings. Use with caution.
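For illustration, adjusting a vector of hypothetical p-values with stats::p.adjust() (the same mechanism applies to all approaches that support .approach_p_adjust):

p <- c(0.01, 0.02, 0.04)            # hypothetical unadjusted p-values
p.adjust(p, method = "bonferroni")  # 0.03 0.06 0.12
p.adjust(p, method = "holm")        # 0.03 0.04 0.04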

Approach suggested by Chin and Dibbern (2010)

Groups are compared in terms of parameter differences across groups. Chin and Dibbern (2010) test if parameter k is equal between two groups. If more than two groups are tested for equality, parameter k is compared between all pairs of groups. In this case, it is recommended to adjust the significance level or the p-values (in cSEM, correction is done by p-value) since this is essentially a multiple-testing setup. If several parameters are tested simultaneously, correction is by group and number of parameters. By default, no multiple-testing correction is done; however, several common adjustments are available via .approach_p_adjust. See stats::p.adjust() for details.

Approach suggested by Keil et al. (2000)

Groups are compared in terms of parameter differences across groups. Keil et al. (2000) test if parameter k is equal between two groups. It is assumed that the standard errors of the coefficients are equal across groups. The calculation of the standard error of the parameter difference is adjusted as proposed by Henseler et al. (2009). If more than two groups are tested for equality, parameter k is compared between all pairs of groups. In this case, it is recommended to adjust the significance level or the p-values (in cSEM, correction is done by p-value) since this is essentially a multiple-testing setup. If several parameters are tested simultaneously, correction is by group and number of parameters. By default, no multiple-testing correction is done; however, several common adjustments are available via .approach_p_adjust. See stats::p.adjust() for details.
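As a rough sketch of the idea (not the cSEM internals), the test statistic with the pooled standard error is commonly written as below, where p1 and p2 are the group estimates, se1 and se2 their bootstrap standard errors, and n1 and n2 the group sizes (all names hypothetical):

t_keil <- function(p1, p2, se1, se2, n1, n2) {
  ## Pooled standard error following Keil et al. (2000) with the
  ## Henseler et al. (2009) adjustment
  se_pooled <- sqrt(((n1 - 1)^2 * se1^2 + (n2 - 1)^2 * se2^2) / (n1 + n2 - 2)) *
    sqrt(1 / n1 + 1 / n2)
  t <- (p1 - p2) / se_pooled
  2 * pt(abs(t), df = n1 + n2 - 2, lower.tail = FALSE)  # two-sided p-value
}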

Approach suggested by Nitzl (2010)

Groups are compared in terms of parameter differences across groups. Similar to Keil et al. (2000), a single parameter k is tested for equality between two groups. In contrast to Keil et al. (2000), it is assumed that the standard errors of the coefficients are unequal across groups (Sarstedt et al., 2011). If more than two groups are tested for equality, parameter k is compared between all pairs of groups. In this case, it is recommended to adjust the significance level or the p-values (in cSEM, correction is done by p-value) since this is essentially a multiple-testing setup. If several parameters are tested simultaneously, correction is by group and number of parameters. By default, no multiple-testing correction is done; however, several common adjustments are available via .approach_p_adjust. See stats::p.adjust() for details.
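A corresponding sketch for the unequal-variance case, in the spirit of a Welch-type test (again illustrative only, not the cSEM internals; the Satterthwaite degrees of freedom shown here are one common choice):

t_nitzl <- function(p1, p2, se1, se2, n1, n2) {
  v1 <- (n1 - 1) / n1 * se1^2
  v2 <- (n2 - 1) / n2 * se2^2
  t  <- (p1 - p2) / sqrt(v1 + v2)
  df <- (v1 + v2)^2 / (v1^2 / (n1 - 1) + v2^2 / (n2 - 1))  # Satterthwaite approximation
  2 * pt(abs(t), df = df, lower.tail = FALSE)              # two-sided p-value
}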

Approach suggested by Henseler (2007) and Henseler et al. (2009)

This approach is also known as PLS-MGA (Henseler et al., 2009; Sarstedt et al., 2011). It tests whether the population parameter of group 1 is larger than or equal to the population parameter of group 2. In doing so, each bias-corrected bootstrap estimate of group 1 is compared with each bias-corrected bootstrap estimate of group 2. The outcome is an estimated probability. The decision is based on whether this probability is smaller than .alpha or larger than 1 - .alpha. Hence, two null hypotheses are tested, namely H_0: theta_1 <= theta_2 and H_0: theta_1 >= theta_2. As a consequence, it is currently not possible to adjust the p-values in case of multiple comparisons, i.e., .approach_p_adjust is ignored.
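To illustrate the idea (not the cSEM internals), given two vectors of bias-corrected bootstrap estimates for the same parameter, boot1 for group 1 and boot2 for group 2 (hypothetical names and values):

set.seed(123)
boot1 <- rnorm(500, mean = 0.30, sd = 0.05)  # hypothetical bootstrap estimates, group 1
boot2 <- rnorm(500, mean = 0.25, sd = 0.05)  # hypothetical bootstrap estimates, group 2

## Share of all cross-group pairs in which the group 1 estimate does not
## exceed the group 2 estimate, i.e., the estimated probability that
## theta_1 <= theta_2. Compare against .alpha and 1 - .alpha to decide.
p_mga <- mean(outer(boot1, boot2, FUN = "<="))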

Use .approach_mgd to choose the approach. By default all approaches are computed (.approach_mgd = "all").

By default, approaches based on parameter differences across groups compare all parameters (.parameters_to_compare = NULL). To compare only a subset of parameters provide the parameters in lavaan model syntax just like the model to estimate. Take the simple model:

model_to_estimate <- "
# Structural model
eta2 ~ eta1
eta3 ~ eta1 + eta2

# Each concept is measured by 3 indicators, i.e., modeled as a latent variable
eta1 =~ y11 + y12 + y13
eta2 =~ y21 + y22 + y23
eta3 =~ y31 + y32 + y33
"

If only the path from eta1 to eta3 and the loadings of eta1 are to be compared across groups, write:

to_compare <- "
# Structural parameters to compare
eta3 ~ eta1

# Loadings to compare
eta1 =~ y11 + y12 + y13
"

Note that the "model" provided to .parameters_to_compare does not have to be an estimable model!

Note also that, compared to all other functions in cSEM that use this argument, .handle_inadmissibles defaults to "replace" to accommodate the Sarstedt et al. (2011) approach.

The argument .R_permutation is ignored for the "Nitzl" and the "Keil" approaches. .R_bootstrap is ignored if .object already contains resamples, i.e., has class cSEMResults_resampled, and if only the "Klesel" or the "Chin" approach is used.

The argument .saturated is used by "Klesel" only. If .saturated = TRUE, the original structural model is ignored and replaced by a saturated model, i.e., a model in which all constructs are allowed to correlate freely. This is useful to test differences in the measurement models between groups in isolation.
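For example, to test measurement-model differences in isolation using the Klesel et al. (2019) approach (a usage sketch; out is a cSEMResults object as created in the Examples section below):

testMGD(out, .approach_mgd = "Klesel", .saturated = TRUE,
        .R_permutation = 499, .verbose = FALSE)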

References

Chin, W. W., & Dibbern, J. (2010). An introduction to a permutation based procedure for multi-group PLS analysis. In Handbook of Partial Least Squares. Springer.

Henseler, J. (2007). A new and simple approach to multi-group analysis in partial least squares path modeling. In Proceedings of PLS'07.

Henseler, J., Ringle, C. M., & Sinkovics, R. R. (2009). The use of partial least squares path modeling in international marketing. Advances in International Marketing, 20.

Keil, M., Tan, B. C. Y., Wei, K.-K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000). A cross-cultural study on escalation of commitment behavior in software projects. MIS Quarterly, 24(2).

Klesel, M., Schuberth, F., Henseler, J., & Niehaves, B. (2019). A test for multigroup comparison using partial least squares path modeling. Internet Research, 29(3).

Nitzl, C. (2010). Eine anwenderorientierte Einführung in die Partial Least Square (PLS)-Methode. Arbeitspapier, Universität Hamburg.

Sarstedt, M., Henseler, J., & Ringle, C. M. (2011). Multigroup analysis in partial least squares (PLS) path modeling: Alternative methods and empirical results. Advances in International Marketing, 22.

See Also

csem(), cSEMResults, testMICOM(), testOMF()

Examples

# ===========================================================================
# Basic usage
# ===========================================================================
model <- "
# Structural model
QUAL ~ EXPE
EXPE ~ IMAG
SAT  ~ IMAG + EXPE + QUAL + VAL
LOY  ~ IMAG + SAT
VAL  ~ EXPE + QUAL

# Measurement model

EXPE <~ expe1 + expe2 + expe3 + expe4 + expe5
IMAG <~ imag1 + imag2 + imag3 + imag4 + imag5
LOY  =~ loy1  + loy2  + loy3  + loy4
QUAL =~ qual1 + qual2 + qual3 + qual4 + qual5
SAT  <~ sat1  + sat2  + sat3  + sat4
VAL  <~ val1  + val2  + val3  + val4
"

## Create list of virtually identical data sets
dat <- list(satisfaction[-3, ], satisfaction[-5, ], satisfaction[-10, ])
out <- csem(dat, model, .resample_method = "bootstrap", .R = 40) 

## Test 
testMGD(out, .R_permutation = 40, .verbose = FALSE)

# Notes: 
#  1. .R_permutation (and .R in the call to csem) is small to make examples run quicker; 
#     should be higher in real applications.
#  2. The tests will not reject their respective H0s since the groups are
#     virtually identical.
#  3. The only exception is the approach suggested by Sarstedt et al. (2011),
#     a sign that the test is unreliable.
#  4. As opposed to other functions involving the argument
#     '.handle_inadmissibles', the default here is "replace" as this is
#     required by Sarstedt et al. (2011)'s approach.

# ===========================================================================
# Extended usage
# ===========================================================================
### Test only a subset ------------------------------------------------------
# By default all parameters are compared. Select a subset by providing a 
# model in lavaan model syntax:

to_compare <- "
# Path coefficients
QUAL ~ EXPE

# Loadings
EXPE <~ expe1 + expe2 + expe3 + expe4 + expe5
"

## Test 
testMGD(out, .parameters_to_compare = to_compare, .R_permutation = 20, 
        .R_bootstrap = 20, .verbose = FALSE)

### Different p-value adjustments -------------------------------------------
# To adjust p-values to accommodate multiple testing, use .approach_p_adjust.
# The number of tests used for the adjustment depends on the approach chosen.
# For the Chin approach, for example, it is the number of parameters to test
# times the number of possible group comparisons. To compare the results for
# different adjustments, a vector of adjustment methods may be supplied.

## Test 
testMGD(out, .parameters_to_compare = to_compare, 
        .approach_p_adjust = c("none", "bonferroni"),
        .R_permutation = 20, .R_bootstrap = 20, .verbose = FALSE)