
psychonetrics (version 0.4)

varcov: Variance-covariance family of psychonetrics models

Description

This family of models includes only a variance-covariance matrix with a mean structure. The type argument can be used to define which model is used: type = "cov" (default) models a variance-covariance matrix directly, type = "chol" (alias: cholesky()) models a Cholesky decomposition, type = "prec" (alias: precision()) models a precision matrix, type = "ggm" (alias: ggm()) models a Gaussian graphical model (Epskamp, Rhemtulla and Borsboom, 2017), and type = "cor" (alias: corr()) models a correlation matrix.

Usage

varcov(data, type = c("cov", "chol", "prec", "ggm", "cor"),
                 sigma = "full", kappa = "full", omega = "full",
                 lowertri = "full", delta = "full", rho = "full", SD =
                 "full", mu, tau, vars, ordered = character(0), groups,
                 covs, means, nobs, missing = "listwise", equal =
                 "none", baseline_saturated = TRUE, estimator =
                 "default", optimizer = "default", storedata = FALSE,
                 WLS.V, sampleStats, meanstructure, corinput, verbose =
                 TRUE)
cholesky(...)
precision(...)
prec(...)
ggm(...)
corr(...)

Arguments

data

A data frame encoding the data used in the analysis. Can be missing if covs and nobs are supplied.

type

The type of model used. See description.

sigma

Only used when type = "cov". Either "full" to estimate every element freely, "empty" to only include diagonal elements, or a matrix of dimensions node x node with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
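For illustration, a hypothetical pattern matrix for a three-node model, in which one covariance is fixed to zero and two covariances are constrained to be equal, could be constructed as follows (a sketch in base R; the name sigma_spec is illustrative):

```r
# Pattern matrix for sigma: 0 = fixed to zero, 1 = freely estimated,
# and matching higher integers (here 2) = constrained to be equal.
sigma_spec <- matrix(c(1, 2, 0,
                       2, 1, 2,
                       0, 2, 1),
                     nrow = 3, byrow = TRUE)

# The matrix must be symmetric, matching the symmetry of sigma itself;
# it could then be supplied as varcov(..., sigma = sigma_spec).
```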

kappa

Only used when type = "prec". Either "full" to estimate every element freely, "empty" to only include diagonal elements, or a matrix of dimensions node x node with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.

omega

Only used when type = "ggm". Either "full" to estimate every element freely, "empty" to set all elements to zero, or a matrix of dimensions node x node with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.

lowertri

Only used when type = "chol". Either "full" to estimate every element freely, "empty" to only include diagonal elements, or a matrix of dimensions node x node with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.

delta

Only used when type = "ggm". Either "full" to estimate every element freely, "empty" to set all elements to zero, or a matrix of dimensions node x node with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.

rho

Only used when type = "cor". Either "full" to estimate every element freely, "empty" to set all elements to zero, or a matrix of dimensions node x node with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.

SD

Only used when type = "cor". Either "full" to estimate every element freely, "empty" to set all elements to zero, or a matrix of dimensions node x node with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.

mu

Optional vector encoding the mean structure. Set elements to 0 to indicate means fixed to zero, 1 to indicate freely estimated means, and higher integers to indicate equality constraints. For multiple groups, this argument can be a list or array with each element/column encoding such a vector.

tau

Optional list encoding the thresholds per variable.

vars

An optional character vector encoding the variables used in the analysis. Must correspond to column names of the dataset in data.

groups

An optional string indicating the name of the group variable in data.

covs

A sample variance-covariance matrix, or a list/array of such matrices for multiple groups. IMPORTANT NOTE: psychonetrics expects the maximum likelihood (ML) covariance matrix, which is NOT what cov returns by default. Manually rescale the result of cov by (nobs - 1)/nobs to obtain the ML covariance matrix.
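The rescaling described above can be sketched in base R as follows (the data and object names are illustrative):

```r
# Toy data: 100 observations of 3 variables
set.seed(1)
dat <- as.data.frame(matrix(rnorm(300), ncol = 3,
                            dimnames = list(NULL, c("x1", "x2", "x3"))))
n <- nrow(dat)

# cov() divides by (n - 1); rescale to obtain the ML (divide by n) estimate:
S_ml <- cov(dat) * (n - 1) / n

# S_ml, colMeans(dat) and n could then be supplied as
# varcov(covs = S_ml, means = colMeans(dat), nobs = n, ...)
```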

means

A vector of sample means, or a list/matrix containing such vectors for multiple groups.

nobs

The number of observations used in covs and means, or a vector of such numbers of observations for multiple groups.

missing

How missingness should be handled when computing the sample covariances and number of observations from data. Can be "listwise" for listwise deletion, or "pairwise" for pairwise deletion.
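The two deletion schemes correspond to the following base R computations (a sketch only; psychonetrics performs this internally):

```r
# Toy data with missing values:
dat <- data.frame(x = c(1, 2, NA, 4, 5),
                  y = c(2, NA, 3, 4, 6))

# Listwise deletion: first drop every row containing any missing value
cov_listwise <- cov(na.omit(dat))

# Pairwise deletion: each entry uses all complete pairs for that pair of
# variables, so different entries may be based on different sample sizes
cov_pairwise <- cov(dat, use = "pairwise.complete.obs")
```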

equal

A character vector indicating which matrices should be constrained equal across groups.

baseline_saturated

A logical indicating if the baseline and saturated models should be included. Mostly used internally; setting this manually is not recommended.

estimator

The estimator to be used. Currently implemented are "ML" for maximum likelihood estimation, "FIML" for full-information maximum likelihood estimation, "ULS" for unweighted least squares estimation, "WLS" for weighted least squares estimation, and "DWLS" for diagonally weighted least squares estimation.

optimizer

The optimizer to be used. Usually either "nlminb" (with box constraints) or "ucminf" (ignoring box constraints), but any optimizer supported by optimr can be used.

storedata

Logical, should the raw data be stored? Needed for bootstrapping (see bootstrap).

WLS.V

Optional WLS weights matrix.

sampleStats

An optional sample statistics object. Mostly used internally.

ordered

A character vector indicating the variables that are ordered categorical, or set to TRUE to model all variables as ordered categorical.

meanstructure

Logical, should the meanstructure be modeled explicitly?

corinput

Logical, is the input a correlation matrix?

verbose

Logical, should messages be printed?

...

Arguments sent to varcov.

Value

An object of the class psychonetrics

Details

The model used in this family is:

\(\mathrm{var}(\boldsymbol{y} ) = \boldsymbol{\Sigma}\)

\(\mathcal{E}( \boldsymbol{y} ) = \boldsymbol{\mu}\)

in which the covariance matrix can further be modeled in three ways. With type = "chol" as Cholesky decomposition:

\(\boldsymbol{\Sigma} = \boldsymbol{L}\boldsymbol{L}^{\top}\),

with type = "prec" as Precision matrix:

\(\boldsymbol{\Sigma} = \boldsymbol{K}^{-1}\),

and finally with type = "ggm" as Gaussian graphical model:

\(\boldsymbol{\Sigma} = \boldsymbol{\Delta}(\boldsymbol{I} - \boldsymbol{\Omega})^{-1} \boldsymbol{\Delta}\).
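These three parameterizations can be checked numerically in base R; a sketch on an arbitrary positive-definite matrix:

```r
# An example 3 x 3 covariance matrix:
Sigma <- matrix(c(2.0, 0.5, 0.3,
                  0.5, 1.5, 0.4,
                  0.3, 0.4, 1.0), nrow = 3)

# Cholesky: chol() returns the upper triangle, so L = t(chol(Sigma)),
# and L %*% t(L) reconstructs Sigma.
L <- t(chol(Sigma))

# Precision matrix: K = solve(Sigma), and solve(K) reconstructs Sigma.
K <- solve(Sigma)

# Gaussian graphical model: Delta is diagonal and Omega contains the
# partial correlations (with a zero diagonal);
# Delta %*% solve(diag(3) - Omega) %*% Delta reconstructs Sigma.
Delta <- diag(1 / sqrt(diag(K)))
Omega <- diag(3) - Delta %*% K %*% Delta
```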

References

Epskamp, S., Rhemtulla, M., & Borsboom, D. (2017). Generalized network psychometrics: Combining network and latent variable models. Psychometrika, 82(4), 904-927.

See Also

lvm, var1, dlvm1

Examples

# NOT RUN {
# Load bfi data from the psychTools package:
library("psychTools")
data(bfi)

# Also load dplyr for the pipe operator:
library("dplyr")

# Let's take the agreeableness items, and gender:
ConsData <- bfi %>% 
  select(A1:A5, gender) %>% 
  na.omit # Remove missing values (otherwise use estimator = "FIML")

# Define variables:
vars <- names(ConsData)[1:5]

# Let's fit an empty GGM:
mod0 <- ggm(ConsData, vars = vars, omega = "empty")

# Run the model:
mod0 <- mod0 %>% runmodel

# }
# NOT RUN {
# We can look at the modification indices:
mod0 %>% MIs

# To automatically add parameters along modification indices, we can use stepup:
mod1 <- mod0 %>% stepup(greedy = TRUE, alpha = 0.005, greedyadjust = "fdr")

# Let's also prune all non-significant edges to finish:
mod1 <- mod1 %>% prune(alpha = 0.005)

# Look at the fit:
mod1 %>% fit

# Compare to original (baseline) model:
compare(baseline = mod0, adjusted = mod1)

# We can also look at the parameters:
mod1 %>% parameters

# Or obtain the network as follows:
getmatrix(mod1, "omega")

# We may also be interested in the stability of our search algorithm.
# We can bootstrap our data and repeat the search as follows:
mod_boot <- ggm(ConsData, vars = vars, omega = "empty", storedata = TRUE) %>%
  bootstrap %>% # bootstrap data
  runmodel %>% # Run model
  stepup(greedy = TRUE, alpha = 0.005, greedyadjust = "fdr") %>% # Search algorithm 1
  prune(alpha = 0.005) # Search algorithm 2

# Which may give some different results:
getmatrix(mod_boot, "omega")

# This can be repeated (ideally 100-1000 times):
bootstraps <- replicate(10, simplify = FALSE, expr = {
  mod_boot <- ggm(ConsData, vars = vars, omega = "empty", storedata = TRUE) %>%
    bootstrap %>% # bootstrap data
    runmodel %>% # Run model
    stepup(greedy = TRUE, alpha = 0.005, greedyadjust = "fdr") %>% # Search algorithm 1
    prune(alpha = 0.005) # Search algorithm 2
  getmatrix(mod_boot, "omega")
})

# Now we can look at, for example, the inclusion probability:
inclusionProportion <- 1/length(bootstraps) * Reduce("+",lapply(bootstraps,function(x)1*(x!=0)))
inclusionProportion
# }
