An algorithm to identify whether data were generated from a
random, factor, or network model using factor and network loadings.
The algorithm uses heuristics based on theory and simulation. These
heuristics were then submitted to several deep learning neural networks
trained on 240,000 samples per model, generated with varying parameters.
Usage
LCT(data, n, iter = 100)
Arguments
data
Matrix or data frame.
A data frame with the variables to be used in the test, or a correlation matrix.
If a correlation matrix is used, the argument n must be specified (see the usage sketch below)
n
Integer.
Sample size (if the data provided is a correlation matrix)
iter
Integer.
Number of replicate samples to be drawn from a multivariate
normal distribution (uses MASS::mvrnorm).
Defaults to 100
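A minimal usage sketch (an illustration, not part of the original documentation). It assumes LCT() is available in the current session (e.g., via the EGAnet package, an assumption here) and uses a hypothetical numeric data frame my_data:

library(EGAnet)  # assumption: LCT() is provided by EGAnet

# Raw data: the sample size is taken from the number of rows
res <- LCT(data = my_data, iter = 100)

# Correlation matrix: the sample size n must be supplied
res_cor <- LCT(data = cor(my_data), n = nrow(my_data), iter = 100)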
Value
Returns a list containing:
empirical
Prediction of model based on empirical dataset only
bootstrap
Prediction of model based on means of the loadings across
the bootstrap replicate samples
proportion
Proportions of models suggested across bootstraps
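A hypothetical sketch of inspecting the returned list, using the element names from the Value section above (the values in the comments are illustrative only, not real output):

res <- LCT(data = my_data, iter = 100)  # my_data is a hypothetical data frame

res$empirical   # model predicted from the empirical dataset, e.g., "Factor"
res$bootstrap   # model predicted from mean loadings across bootstrap samples
res$proportion  # proportion of bootstraps favoring each candidate model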
References
# Original implementation of LCT
Christensen, A. P., & Golino, H. (in press).
On the equivalency of factor and network loadings.
Behavior Research Methods.
10.31234/osf.io/xakez
# Current implementation of LCT
Christensen, A. P., & Golino, H. (under review).
Random, factor, or network model? Predictions from neural networks.
PsyArXiv.
10.31234/osf.io/awkcb