Structural Equation Modeling (SEM) is a powerful tool for confirming multivariate structures and is implemented in the lavaan, sem, and OpenMx packages. Because SEM is confirmatory, it tests specific models. Exploratory Structural Equation Modeling (ESEM), in contrast, takes an exploratory approach. Using factor extension, the factors of one set of variables (X) may be extended into the variable space of another set (Y). The correlations between the two sets of latent variables may then be estimated, much as conventional SEM would do. Because it is based upon exploratory factor analysis (EFA), this provides a quick and easy way to do exploratory structural equation modeling.

```
esem(r, varsX, varsY, nfX = 1, nfY = 1, n.obs = NULL, fm = "minres",
     rotate = "oblimin", plot = TRUE, cor = "cor", use = "pairwise",
     weight = NULL, ...)

esem.diagram(esem = NULL, labels = NULL, cut = .3, errors = FALSE,
     simple = TRUE, regression = FALSE, lr = TRUE, digits = 1,
     e.size = .1, adj = 2, main = "Exploratory Structural Model", ...)

interbattery(r, varsX, varsY, nfX = 1, nfY = 1, n.obs = NULL, cor = "cor",
     use = "pairwise", weight = NULL)
```

r

A correlation matrix or a raw data matrix suitable for factor analysis

varsX

The variables defining set X

varsY

The variables defining set Y

nfX

The number of factors to extract for the X variables

nfY

The number of factors to extract for the Y variables

n.obs

The number of observations (needed for eBIC and chi square); may be ignored.

fm

The factor method to use, e.g., "minres", "mle" etc. (see fa for details)

rotate

Which rotation to use. (see fa for details)

plot

If TRUE, draw the esem.diagram

cor

Which correlation option to use (see fa for details)

use

"pairwise" for pairwise complete data, for other options see cor

weight

Weights to apply to cases when finding wt.cov

…

Other parameters to pass to the fa or esem.diagram functions.

esem

The object returned from esem and passed to esem.diagram

labels

Variable labels

cut

Loadings with abs(loading) > cut will be shown

simple

Only the biggest loading per item is shown

errors

Include error estimates (as arrows)

e.size

Size of the ellipses (adjusted by the number of variables)

digits

Round coefficients to digits

adj

Loadings are adjusted by factor number mod adj to decrease the likelihood of overlap

main

Graphic title; defaults to "Exploratory Structural Model"

lr

Draw the graphic left to right (TRUE) or top to bottom (FALSE)

regression

Not yet implemented

The amount of variance in each of the X and Y variables accounted for by the total model.

The amount of variance accounted for by each factor -- independent of the other factors.

Degrees of freedom of the model

Degrees of freedom of the null model (the correlation matrix)

Chi square of the null model

Chi square of the model. This is found by examining the size of the residuals compared to their standard errors.

The root mean square of the residuals.

Harmonic sample size if using min.chi for factor extraction.

Probability of the empirical chi square given the hypothesis of an identity matrix.

Adjusted root mean square residual

When normal theory fails (e.g., in the case of non-positive definite matrices), it is useful to examine the empirically derived eBIC based upon the empirical \(\chi^2\) - 2 df.

Sample size adjusted empirical BIC

Sum of squared residuals versus sum of squared original values

Fit applied to the off-diagonal elements

Standard deviation of the residuals

Number of factors extracted

Item complexity

Number of total observations

The factor pattern matrix for the combined X and Y factors

The factor structure matrix for the combined X and Y factors

Just the X set of loadings (pattern) without the extension variables.

Just the Y set of loadings (pattern) without the extension variables.

The correlations of the X factors

The correlations of the Y factors

The correlations of the X and Y factors, both within and across sets.

The factor method used

The complete factor analysis output for the X set

The complete factor analysis output for the Y set

The residual correlation matrix (R - model). May be examined by a call to residual().

Echo back the original call to the function.

Model code for the sem and lavaan packages to do subsequent confirmatory modeling

Factor analysis as implemented in `fa` attempts to summarize the covariance (correlational) structure of a set of variables with a small set of latent variables or "factors". This solution may be extended into a larger space with more variables without changing the original solution (see `fa.extension`). Similarly, the factors of a second set of variables (the Y set) may be extended into the original (X) set. Doing so allows two independent measurement models, one for X and one for Y. These two sets of latent variables may then be correlated to form an Exploratory Structural Equation Model. (This is exploratory because it is based upon exploratory factor analysis (EFA) rather than a confirmatory factor model (CFA) using more traditional Structural Equation Modeling packages such as sem, lavaan, or Mx.)
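The extension idea can be sketched in a few lines of base R. This is a minimal illustration of the linear algebra, not the psych implementation (which uses `fa.extension`): it assumes a single orthogonal factor and made-up population loadings, and recovers the loadings of the Y variables using only the X-set factor and the cross-correlations.

```r
# Minimal sketch of factor extension (assumptions: one orthogonal factor,
# hypothetical loadings; not the psych fa.extension implementation)
fx <- c(.9, .8, .7)                  # hypothetical loadings of the X variables
fy <- c(.6, .5)                      # hypothetical loadings of the Y variables
R  <- outer(c(fx, fy), c(fx, fy))    # model-implied correlation matrix
diag(R) <- 1
Rxx <- R[1:3, 1:3]                   # correlations among the X variables
Rxy <- R[1:3, 4:5]                   # correlations between the X and Y sets
w   <- solve(Rxx, fx)                # regression weights for the X factor
ext <- t(Rxy) %*% w / sum(fx * w)    # extended (Y) loadings: recovers fy
```

Here the Y loadings are recovered exactly because the correlation matrix was generated from the model; with real data the extension loadings are estimates.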

Although the output seems very similar to that of a normal EFA using `fa`, it is actually two independent factor analyses (of the X and Y sets) that are then mutually extended into each other. That is, the loadings and structure matrices from sets X and Y are merely combined, and the correlations between the two sets of factors are found.

Interbattery factor analysis was developed by Tucker (1958) as a way of comparing the factors common to two batteries of tests. (The implementation here is under development and not yet complete.) Using some straightforward linear algebra, it is easy to find the factors of the intercorrelations between the two sets of variables. This does not require estimating communalities and is closely related to the procedures of canonical correlation.

The difference between the esem and interbattery approaches is that esem first factors the X set and then relates those factors to factors of the Y set. Interbattery factor analysis, on the other hand, tries to find one set of factors that links both sets but is still distinct from factoring both sets together.
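Tucker's interbattery solution can be illustrated with a singular value decomposition of the cross-correlation block. The following is a base-R sketch of that linear algebra using a made-up cross-correlation matrix, not the psych `interbattery` implementation:

```r
# Hypothetical cross-correlation block between 4 X variables and 3 Y variables
Rxy <- matrix(c(.6, .5, .4,
                .5, .6, .4,
                .4, .4, .5,
                .3, .3, .4), nrow = 4, byrow = TRUE)
sv <- svd(Rxy)                       # no communality estimates are needed
loadsX <- sv$u %*% diag(sqrt(sv$d))  # interbattery loadings for the X set
loadsY <- sv$v %*% diag(sqrt(sv$d))  # interbattery loadings for the Y set
# loadsX %*% t(loadsY) reconstructs Rxy exactly when all components are kept
```

Keeping only the first few singular values gives the reduced-rank interbattery factors that link the two sets.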

Revelle, William. (in prep) An introduction to psychometric theory with applications in R. Springer. Working draft available at https://personality-project.org/r/book/

Tucker, Ledyard (1958) An inter-battery method of factor analysis, Psychometrika, 23, 111-136.

`principal`

for principal components analysis (PCA). PCA will give very similar solutions to factor analysis when there are many variables. The differences become more salient as the number of variables decreases. The PCA and FA models are actually very different and should not be confused: one is a model of the observed variables, the other is a model of latent variables.

`irt.fa`

for Item Response Theory analyses using factor analysis, using the two parameter IRT equivalent of loadings and difficulties.

`VSS`

will produce the Very Simple Structure (VSS) and MAP criteria for the number of factors; `nfactors` will compare many different factor criteria.

`ICLUST`

will do a hierarchical cluster analysis alternative to factor analysis or principal components analysis.

`predict.psych`

to find predicted scores based upon new data, `fa.extension`

to extend the factor solution to new variables, `omega`

for hierarchical factor analysis with one general factor.
`fa.multi` for hierarchical factor analysis with an arbitrary number of higher order factors.

`fa.sort`

will sort the factor loadings into echelon form. `fa.organize`

will reorganize the factor pattern matrix into any arbitrary order of factors and items.

`KMO`

and `cortest.bartlett`

for various tests that some people like.

`factor2cluster`

will prepare unit weighted scoring keys of the factors that can be used with `scoreItems`

.

`fa.lookup`

will print the factor analysis loadings matrix along with the item "content" taken from a dictionary of items. This is useful when examining the meaning of the factors.

`anova.psych`

allows for testing the difference between two (presumably nested) factor models.

```
#make up a sem like problem using sim.structure
fx <- matrix(c(.9, .8, .6, rep(0, 4), .6, .8, -.7), ncol = 2)
fy <- matrix(c(.6, .5, .4), ncol = 1)
rownames(fx) <- c("V", "Q", "A", "nach", "Anx")
rownames(fy) <- c("gpa", "Pre", "MA")
Phi <- matrix(c(1, 0, .7, 0, 1, .7, .7, .7, 1), ncol = 3)
gre.gpa <- sim.structural(fx, Phi, fy)
print(gre.gpa)
#now esem it:
example <- esem(gre.gpa$model, varsX = 1:5, varsY = 6:8, nfX = 2, nfY = 1,
                n.obs = 1000, plot = FALSE)
example
esem.diagram(example,simple=FALSE)
#compare two alternative solutions to the first 2 factors of the neo.
#solution 1 is the normal 2 factor solution.
#solution 2 is an esem with 1 factor for the first 6 variables, and 1 for the second 6.
f2 <- fa(psychTools::neo[1:12,1:12],2)
es2 <- esem(psychTools::neo,1:6,7:12,1,1)
summary(f2)
summary(es2)
fa.congruence(f2,es2)
interbattery(Thurstone.9,1:4,5:9,2,2) #compare to the solution of Tucker. We are not there yet.
```