This family of models specifies only a variance-covariance matrix with a mean structure. The type argument defines which parameterization is used: type = "cov" (default) models the variance-covariance matrix directly, type = "chol" (alias: cholesky()) models a Cholesky decomposition, type = "prec" (alias: precision()) models a precision matrix, type = "ggm" (alias: ggm()) models a Gaussian graphical model (Epskamp, Rhemtulla and Borsboom, 2017), and type = "cor" (alias: corr()) models a correlation matrix.
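The alias functions are shorthand for setting the corresponding type. A minimal sketch, using the bfi data from the psychTools package (also used in the examples below):

library("psychonetrics")
library("psychTools")
data(bfi)

# Agreeableness items, complete cases only:
agree <- na.omit(bfi[, paste0("A", 1:5)])

# These two calls specify the same Gaussian graphical model:
mod_a <- varcov(agree, type = "ggm")
mod_b <- ggm(agree)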
varcov(data, type = c("cov", "chol", "prec", "ggm", "cor"),
       sigma = "full", kappa = "full", omega = "full",
       lowertri = "full", delta = "full", rho = "full", SD = "full",
       mu, tau, vars, ordered = character(0), groups,
       covs, means, nobs, missing = "listwise", equal = "none",
       baseline_saturated = TRUE, estimator = "default",
       optimizer, storedata = FALSE, WLS.W, sampleStats,
       meanstructure, corinput, verbose = FALSE,
       covtype = c("choose", "ML", "UB"),
       standardize = c("none", "z", "quantile"), fullFIML = FALSE)
cholesky(...)
precision(...)
prec(...)
ggm(...)
corr(...)
data: A data frame encoding the data used in the analysis. Can be missing if covs and nobs are supplied.
type: The type of model used. See description.
Only used when type = "cov"
. Either "full"
to estimate every element freely, "empty"
to only include diagonal elements, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
Only used when type = "prec"
. Either "full"
to estimate every element freely, "empty"
to only include diagonal elements, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
Only used when type = "ggm"
. Either "full"
to estimate every element freely, "empty"
to set all elements to zero, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
Only used when type = "chol"
. Either "full"
to estimate every element freely, "empty"
to only include diagonal elements, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
Only used when type = "ggm"
. Either "full"
to estimate every element freely, "empty"
to set all elements to zero, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
Only used when type = "cor"
. Either "full"
to estimate every element freely, "empty"
to set all elements to zero, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
Only used when type = "cor"
. Either "full"
to estimate every element freely, "empty"
to set all elements to zero, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
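As a sketch of specifying such a matrix argument (the data frame mydata and its variables x1 to x3 are hypothetical, and equal integer labels are assumed to encode equality constraints as described above), the following encodes a three-node GGM in which the edge between the first two variables is fixed to zero and the two remaining edges are constrained equal:

# 0 = fixed to zero, 1 = free, equal integers > 1 = constrained equal:
omega_structure <- matrix(c(
  0, 0, 2,
  0, 0, 2,
  2, 2, 0), nrow = 3, ncol = 3, byrow = TRUE)

# Hypothetical data frame 'mydata' with columns x1, x2, x3:
# mod <- ggm(mydata, vars = c("x1", "x2", "x3"), omega = omega_structure)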
mu: Optional vector encoding the mean structure. Set elements to 0 to indicate fixed to zero constraints, 1 to indicate free means, and higher integers to indicate equality constraints. For multiple groups, this argument can be a list or array with each element/column encoding such a vector.
tau: Optional list encoding the thresholds per variable.
vars: An optional character vector encoding the variables used in the analysis. Must equal names of variables in data.
groups: An optional string indicating the name of the group variable in data.
covs: A sample variance-covariance matrix, or a list/array of such matrices for multiple groups. Make sure the covtype argument is set correctly to the type of covariances used. A sketch of supplying summary statistics is given below, after the covtype argument.
means: A vector of sample means, or a list/matrix containing such vectors for multiple groups.
nobs: The number of observations used in covs and means, or a vector of such numbers of observations for multiple groups.
covtype: If covs is used, the type of covariance (maximum likelihood or unbiased) the input covariance matrix represents. Set to "ML" for maximum likelihood estimates (denominator n) and "UB" for unbiased estimates (denominator n - 1). The default will try to detect the type used by investigating which is most likely to result from integer-valued datasets.
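A minimal sketch of supplying summary statistics instead of raw data, using the bfi agreeableness items also used in the examples below (this assumes variable names are picked up from the dimnames of the supplied covariance matrix; cov() uses the n - 1 denominator, hence covtype = "UB"):

library("psychonetrics")
library("psychTools")
data(bfi)

agree <- na.omit(bfi[, paste0("A", 1:5)])

S <- cov(agree)       # unbiased (n - 1) covariance matrix
m <- colMeans(agree)  # sample means
n <- nrow(agree)      # number of observations

mod <- varcov(covs = S, means = m, nobs = n, covtype = "UB")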
missing: How should missingness be handled in computing the sample covariances and number of observations when data is used. Can be "listwise" for listwise deletion, or "pairwise" for pairwise deletion.
equal: A character vector indicating which matrices should be constrained equal across groups.
baseline_saturated: A logical indicating if the baseline and saturated models should be included. Mostly used internally; not recommended to be used manually.
estimator: The estimator to be used. Currently implemented are "ML" for maximum likelihood estimation, "FIML" for full-information maximum likelihood estimation, "ULS" for unweighted least squares estimation, "WLS" for weighted least squares estimation, and "DWLS" for diagonally weighted least squares estimation.
optimizer: The optimizer to be used. Can be one of "nlminb" (the default R nlminb function), "ucminf" (from the optimr package), and the C++ based optimizers "cpp_L-BFGS-B", "cpp_BFGS", "cpp_CG", "cpp_SANN", and "cpp_Nelder-Mead". The C++ optimizers are faster but slightly less stable. Defaults to "nlminb".
storedata: Logical, should the raw data be stored? Needed for bootstrapping (see bootstrap).
standardize: Which standardization method should be used? "none" (default) for no standardization, "z" for z-scores, and "quantile" for a non-parametric transformation to the quantiles of the marginal standard normal distribution.
WLS.W: Optional WLS weights matrix.
sampleStats: An optional sample statistics object. Mostly used internally.
verbose: Logical, should progress be printed to the console?
ordered: A vector with strings indicating the variables that are ordered categorical, or set to TRUE to model all variables as ordered categorical. A sketch is given below, after the argument list.
meanstructure: Logical, should the mean structure be modeled explicitly?
corinput: Logical, is the input a correlation matrix?
fullFIML: Logical, should row-wise FIML be used? Not recommended!
...: Arguments sent to varcov.
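A minimal sketch of treating items as ordered categorical, as referenced in the ordered argument above (this reuses the ConsData data frame and vars vector defined in the examples below, and assumes that combining ordered with estimator = "DWLS" is an appropriate choice for these ordinal items):

mod_ordinal <- ggm(ConsData, vars = vars, ordered = vars, estimator = "DWLS")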
An object of the class psychonetrics
The model used in this family is:
\(\mathrm{var}(\boldsymbol{y} ) = \boldsymbol{\Sigma}\)
\(\mathcal{E}( \boldsymbol{y} ) = \boldsymbol{\mu}\)
in which the covariance matrix can further be modeled in three ways. With type = "chol" as a Cholesky decomposition:
\(\boldsymbol{\Sigma} = \boldsymbol{L}\boldsymbol{L}^{\top}\),
with type = "prec" as a precision matrix:
\(\boldsymbol{\Sigma} = \boldsymbol{K}^{-1}\),
and finally with type = "ggm" as a Gaussian graphical model:
\(\boldsymbol{\Sigma} = \boldsymbol{\Delta}(\boldsymbol{I} - \boldsymbol{\Omega})^{-1} \boldsymbol{\Delta}\).
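As a sketch, the GGM-implied covariance matrix can be computed from a fitted model's matrices (assuming a fitted GGM object mod1 as constructed in the examples below; "omega" is retrieved there with getmatrix, and retrieving "delta" the same way is an assumption):

omega <- getmatrix(mod1, "omega")  # partial correlation (network) matrix
delta <- getmatrix(mod1, "delta")  # diagonal scaling matrix

# Model-implied covariance matrix under the GGM parameterization:
implied_sigma <- delta %*% solve(diag(nrow(omega)) - omega) %*% delta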
Epskamp, S., Rhemtulla, M., & Borsboom, D. (2017). Generalized network psychometrics: Combining network and latent variable models. Psychometrika, 82(4), 904-927.
# Load bfi data from the psychTools package:
library("psychTools")
data(bfi)
# Also load dplyr for the pipe operator:
library("dplyr")
# Let's take the agreeableness items, and gender:
ConsData <- bfi %>%
select(A1:A5, gender) %>%
na.omit # Let's remove missingness (otherwise use estimator = "FIML")
# Define variables:
vars <- names(ConsData)[1:5]
# Let's fit an empty GGM:
mod0 <- ggm(ConsData, vars = vars, omega = "empty")
# Run the model:
mod0 <- mod0 %>% runmodel
# We can look at the modification indices:
mod0 %>% MIs
# To automatically add parameters along modification indices, we can use stepup:
mod1 <- mod0 %>% stepup
# Let's also prune all non-significant edges to finish:
mod1 <- mod1 %>% prune
# Look at the fit:
mod1 %>% fit
# Compare to original (baseline) model:
compare(baseline = mod0, adjusted = mod1)
# We can also look at the parameters:
mod1 %>% parameters
# Or obtain the network as follows:
getmatrix(mod1, "omega")