modsem_da() is a function for estimating interaction effects between latent variables
in structural equation models (SEMs) using distributional analytic (DA) approaches.
Methods for estimating interaction effects in SEMs can broadly be split into
two frameworks:
1. Product indicator-based approaches ("dblcent", "rca", "uca", "ca", "pind")
2. Distributionally based approaches ("lms", "qml")
modsem_da() handles the latter, and can estimate models using either the LMS or the QML approach.
NOTE: Run default_settings_da to see the default arguments.
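For instance, a minimal sketch of inspecting those defaults (this assumes default_settings_da() can be called with no arguments; the structure of the returned settings is not documented on this page):

library(modsem)

# Inspect the default settings used by modsem_da() for the LMS and QML methods
defaults <- default_settings_da()
str(defaults)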
Usage

modsem_da(
model.syntax = NULL,
data = NULL,
method = "lms",
verbose = NULL,
optimize = NULL,
nodes = NULL,
convergence.abs = NULL,
convergence.rel = NULL,
optimizer = NULL,
center.data = NULL,
standardize.data = NULL,
standardize.out = NULL,
standardize = NULL,
mean.observed = NULL,
cov.syntax = NULL,
double = NULL,
calc.se = NULL,
FIM = NULL,
EFIM.S = NULL,
OFIM.hessian = NULL,
EFIM.parametric = NULL,
robust.se = NULL,
R.max = NULL,
max.iter = NULL,
max.step = NULL,
start = NULL,
epsilon = NULL,
quad.range = NULL,
adaptive.quad = NULL,
adaptive.frequency = NULL,
n.threads = NULL,
algorithm = NULL,
em.control = NULL,
...
)
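As a sketch of how these arguments are typically supplied in practice (the specific values below are illustrative, not recommendations; oneInt is the example dataset also used in the examples at the bottom of this page):

library(modsem)

m1 <- "
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3
  Y ~ X + Z + X:Z
"

# LMS estimation with a few of the defaults overridden (illustrative values only)
fit <- modsem_da(m1, data = oneInt, method = "lms",
                 nodes = 24,              # number of quadrature nodes
                 convergence.rel = 1e-5,  # relative convergence criterion
                 calc.se = TRUE,          # compute standard errors
                 n.threads = 2)           # use two threads
summary(fit)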
Value

A modsem_da object.

Arguments

model.syntax: lavaan syntax.
data: dataframe.
method: method to use. "lms" = latent moderated structural equations (not passed to lavaan). "qml" = quasi-maximum likelihood estimation of latent moderated structural equations (not passed to lavaan).
verbose: should estimation progress be shown?
optimize: should starting parameters be optimized?
nodes: number of quadrature nodes (points of integration) used in the LMS approach. More nodes give better estimates but slower computation. How many are needed depends on the complexity of the model: for simple models, 16-24 nodes should be enough; for more complex models, higher numbers may be needed. For models with an interaction effect between an endogenous and an exogenous variable, at least 32 nodes should be used, and in practice (e.g., with ordinal or skewed data) more than 32 is recommended. If the data are non-normal, the QML approach may be a better choice. For large numbers of nodes, you may also want to change the quad.range argument.
convergence.abs: absolute convergence criterion. Lower values give better estimates but slower computation. Not relevant when using the QML approach. For the LMS approach, the EM algorithm stops whenever either the relative or the absolute convergence criterion is reached.
convergence.rel: relative convergence criterion. Lower values give better estimates but slower computation. For the LMS approach, the EM algorithm stops whenever either the relative or the absolute convergence criterion is reached.
optimizer: optimizer to use; can be either "nlminb" or "L-BFGS-B". For LMS, "nlminb" is recommended. For QML, "L-BFGS-B" may be faster when there are many iterations, but slower when there are few.
center.data: should data be centered before fitting the model?
standardize.data: should data be scaled before fitting the model? This will be overridden by standardize if standardize is set to TRUE. NOTE: It is recommended that you estimate the model normally and then standardize the output using standardize_model, standardized_estimates, or summary(<modsem_da-object>, standardize = TRUE) (see the examples below).
standardize.out: should the output be standardized? Note that this alters the relationships implied by parameter constraints, since parameters are scaled unevenly even when they share the same label. It does not alter the estimation of the model, only the output. NOTE: It is recommended that you estimate the model normally and then standardize the output using standardized_estimates.
standardize: will standardize the data before fitting the model, remove the mean structure of the observed variables, and standardize the output. Note that standardize.data, mean.observed, and standardize.out will be overridden by standardize if standardize is set to TRUE. NOTE: It is recommended that you estimate the model normally and then standardize the output using standardized_estimates.
mean.observed: should the mean structure of the observed variables be estimated? This will be overridden by standardize if standardize is set to TRUE. NOTE: Not recommended unless you know what you are doing.
cov.syntax: model syntax for the implied covariance matrix (see vignette("interaction_two_etas", "modsem")).
double: try to double the number of dimensions of integration used in LMS. This will be extremely slow, but should be more similar to Mplus.
calc.se: should standard errors be computed? NOTE: If FALSE, the information matrix will not be computed either.
FIM: should the Fisher information matrix be calculated using the observed or the expected values? Must be either "observed" or "expected".
EFIM.S: if the expected Fisher information matrix is computed, EFIM.S selects the number of Monte Carlo samples. Defaults to 100. NOTE: This number should likely be increased for better estimates (e.g., 1000-10000), but doing so may drastically increase computation time.
OFIM.hessian: should the observed Fisher information be computed using the Hessian? If FALSE, it is computed using the gradient.
EFIM.parametric: should the data used to calculate the expected Fisher information matrix be simulated parametrically (based on the assumptions and implied parameters of the model) or non-parametrically (stochastically sampled)? If you believe the normality assumptions are violated, EFIM.parametric = FALSE might be the better option.
robust.se: should robust standard errors be computed? Meant to be used with QML; can be unreliable with the LMS approach.
R.max: maximum population size (not sample size) used in the calculation of the expected Fisher information matrix.
max.iter: maximum number of iterations.
max.step: maximum number of steps for the M-step in the EM algorithm (LMS).
start: starting parameters.
epsilon: finite difference used for numerical derivatives.
quad.range: range, in z-scores, over which to perform numerical integration in the LMS approach when using quasi-adaptive Gauss-Hermite quadratures. The default is Inf, such that f(t) is integrated from -Inf to Inf, but this will likely be inefficient and pointless when the number of nodes is large. Nodes outside +/- quad.range will be ignored.
adaptive.quad: should a quasi-adaptive quadrature be used? If TRUE, the quadrature nodes are adapted to the data; if FALSE, they are fixed. Default is FALSE. The adaptive quadrature does not fit a separate quadrature to each participant, but instead tries to place more nodes where the posterior distribution is highest. Compared with a fixed Gauss-Hermite quadrature, this usually means that fewer nodes are placed in the tails of the distribution (see the examples below for an illustrative set of quadrature settings).
adaptive.frequency: how often should the quasi-adaptive quadrature be recalculated? Defaults to 3, meaning that it is recalculated every third EM iteration.
n.threads: number of threads (cores) to use for parallel processing. If NULL, at most 2 threads are used. If an integer is specified, that number of threads is used (e.g., n.threads = 4 uses 4 threads). If "default", the default number of threads (2) is used. If "max", all available threads are used; "min" uses 1 thread.
algorithm: algorithm to use for the EM procedure. Can be either "EM" or "EMA". "EM" is the standard EM algorithm; "EMA" is an accelerated EM procedure that uses Quasi-Newton and Fisher scoring optimization steps when needed. Default is "EM".
em.control: a list of control parameters for the EM algorithm. See default_settings_da for defaults.
...: additional arguments passed to the estimation function.
Examples

library(modsem)
# For more examples, check README and/or GitHub.
# One interaction
m1 <- "
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner model
Y ~ X + Z + X:Z
"
if (FALSE) {
# QML Approach
est1 <- modsem_da(m1, oneInt, method = "qml")
summary(est1)
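# Standardize the output after estimation, as recommended in the notes on
# standardize.data, standardize.out, and standardize above. The calls to
# standardized_estimates() and summary(..., standardize = TRUE) follow the
# argument documentation; treat them as a sketch rather than verified output.
standardized_estimates(est1)       # standardized parameter estimates
summary(est1, standardize = TRUE)  # standardized summary output

# Expected (rather than observed) Fisher information with more Monte Carlo
# samples than the default of 100; EFIM.S = 2000 is an illustrative value.
est1_efim <- modsem_da(m1, oneInt, method = "qml",
                       FIM = "expected", EFIM.S = 2000)
summary(est1_efim)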
# Theory Of Planned Behavior
tpb <- "
# Outer Model (Based on Hagger et al., 2007)
ATT =~ att1 + att2 + att3 + att4 + att5
SN =~ sn1 + sn2
PBC =~ pbc1 + pbc2 + pbc3
INT =~ int1 + int2 + int3
BEH =~ b1 + b2
# Inner Model (Based on Steinmetz et al., 2011)
# Covariances
ATT ~~ SN + PBC
PBC ~~ SN
# Causal Relationships
INT ~ ATT + SN + PBC
BEH ~ INT + PBC
BEH ~ INT:PBC
"
# LMS Approach
estTpb <- modsem_da(tpb, data = TPB, method = "lms", EFIM.S = 1000)
summary(estTpb)
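# Illustrative LMS settings combining several of the arguments documented
# above: more quadrature nodes, a quasi-adaptive quadrature, a restricted
# integration range, and the accelerated EM algorithm. The particular values
# are a sketch, not recommendations.
estTpb2 <- modsem_da(tpb, data = TPB, method = "lms",
                     nodes = 32,            # >= 32 suggested for endogenous-by-exogenous interactions
                     adaptive.quad = TRUE,  # place more nodes where the posterior is highest
                     quad.range = 6,        # ignore nodes outside +/- 6 z-scores
                     algorithm = "EMA")     # accelerated EM (Quasi-Newton / Fisher scoring steps)
summary(estTpb2)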
}