fa(r, nfactors=1, n.obs=NA, n.iter=1, rotate="oblimin", scores="regression",
   residuals=FALSE, SMC=TRUE, covar=FALSE, missing=FALSE, impute="median",
   min.err=0.001, max.iter=50, symmetric=TRUE, warnings=TRUE, fm="minres",
   alpha=.1, p=.05, oblique.scores=FALSE, np.obs, use="pairwise", cor="cor", ...)

fac(r, nfactors=1, n.obs=NA, rotate="oblimin", scores="tenBerge", residuals=FALSE,
    SMC=TRUE, covar=FALSE, missing=FALSE, impute="median", min.err=0.001,
    max.iter=50, symmetric=TRUE, warnings=TRUE, fm="minres", alpha=.1,
    oblique.scores=FALSE, np.obs, use="pairwise", cor="cor", ...)

fa.poly(x, nfactors=1, n.obs=NA, n.iter=1, rotate="oblimin", SMC=TRUE, missing=FALSE,
    impute="median", min.err=.001, max.iter=50, symmetric=TRUE, warnings=TRUE,
    fm="minres", alpha=.1, p=.05, scores="regression", oblique.scores=TRUE,
    weight=NULL, global=TRUE, ...)   #deprecated

factor.minres(r, nfactors=1, residuals=FALSE, rotate="varimax", n.obs=NA,
    scores=FALSE, SMC=TRUE, missing=FALSE, impute="median", min.err=0.001,
    digits=2, max.iter=50, symmetric=TRUE, warnings=TRUE, fm="minres")   #deprecated

factor.wls(r, nfactors=1, residuals=FALSE, rotate="varimax", n.obs=NA,
    scores=FALSE, SMC=TRUE, missing=FALSE, impute="median", min.err=.001,
    digits=2, max.iter=50, symmetric=TRUE, warnings=TRUE, fm="wls")   #deprecated
To show the loadings sorted by absolute value, use print.psych with sort=TRUE. The fit statistic reported (how well the factor model reproduces the correlation matrix) is the same one used by VSS, ICLUST, and principal. If the number of observations is specified or found, a chi square statistic is reported using the formula from factanal (which seems to be Bartlett's test): $\chi^2 = (n.obs - 1 - (2p + 5)/6 - 2k/3) f$, where f is the objective function, p is the number of variables, and k is the number of factors. The factor score weights are also used by the predict.psych function to find predicted factor scores for new cases.
Maximum likelihood procedures are probably the most commonly preferred way of doing factor analysis (see factanal). The existence of uniquenesses is what distinguishes factor analysis from principal components analysis (e.g., principal). If variables are thought to represent a "true" or latent part, then factor analysis provides an estimate of the correlations with the latent factor(s) representing the data. If variables are thought to be measured without error, then principal components provides the most parsimonious description of the data. The fa function will do factor analyses using one of four different algorithms: minimum residual (minres), principal axes, weighted least squares, or maximum likelihood.
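As a check on the formula above, the reported statistic can be reproduced by hand from a fitted model. This is a minimal sketch using the Harman74.cor data (145 observations for that study); f4$objective and f4$STATISTIC are elements of the fa output.

library(psych)
n.obs <- 145; p <- 24; k <- 4      #Holzinger-Harman: 24 tests, 145 cases
f4 <- fa(Harman74.cor$cov, nfactors=k, fm="mle", n.obs=n.obs)
#Bartlett-style chi square computed from the minimized objective function f
chi2 <- (n.obs - 1 - (2*p + 5)/6 - (2*k)/3) * f4$objective
chi2                               #should closely agree with f4$STATISTIC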
Principal axes factor analysis has a long history in exploratory analysis and is a straightforward procedure. Successive eigen value decompositions are done on a correlation matrix with the diagonal replaced with diag(FF') until $\sum diag(FF')$ does not change (very much). The current limit of max.iter=50 seems to work for most problems, but the Holzinger-Harman 24 variable problem needs about 203 iterations to converge for a 5 factor solution.
Not all factor programs that do principal axes do iterative solutions. The example from the SAS manual (Chapter 26) is such a case. To achieve that solution, it is necessary to specify max.iter=1. Comparing that solution to an iterated one (the default) shows that iterations improve the solution. In addition, fm="minres" or fm="mle" produces even better solutions for this example.
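For example (a minimal sketch using the Harman74.cor data), a single pass reproduces the classic non-iterated principal axes solution, while the default iterates the communality estimates:

library(psych)
pa1 <- fa(Harman74.cor$cov, 4, fm="pa", rotate="none", max.iter=1)  #one pass
paI <- fa(Harman74.cor$cov, 4, fm="pa", rotate="none")   #iterated (default)
#the iterated solution has smaller off diagonal residuals
low <- lower.tri(pa1$residual)
c(once = mean(pa1$residual[low]^2), iterated = mean(paI$residual[low]^2))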
Principal axes may be used in cases when maximum likelihood solutions fail to converge, although fm="minres" will also do that and tends to produce better (smaller residuals) solutions.
The fm="minchi" option is a variation on the "minres" (ols) solution and minimizes the sample size weighted residuals rather than just the residuals.
A problem in factor analysis is to find the best estimate of the original communalities. Using the Squared Multiple Correlation (SMC) for each variable will underestimate the communalities; using 1s will overestimate them. By default, the SMC estimate is used. In either case, iterative techniques will tend to converge on a stable solution. If, however, a solution fails to be achieved, it is useful to try again using ones (SMC=FALSE). Alternatively, a vector of starting values for the communalities may be supplied through the SMC option.
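A sketch of the three ways to start the communality estimates (supplying a vector follows the description above):

library(psych)
f.smc  <- fa(Harman74.cor$cov, 4, fm="pa")                   #SMCs (default)
f.ones <- fa(Harman74.cor$cov, 4, fm="pa", SMC=FALSE)        #start from 1s
f.vec  <- fa(Harman74.cor$cov, 4, fm="pa", SMC=rep(.5, 24))  #own starting values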
The iterated principal axes algorithm does not attempt to find the best (as defined by a maximum likelihood criterion) solution, but rather one that converges rapidly using successive eigen value decompositions. The maximum likelihood criterion of fit and the associated chi square value are reported, and will be worse than that found using maximum likelihood procedures.
The minimum residual (minres) solution is an unweighted least squares solution that takes a slightly different approach. It uses the optim function and adjusts the diagonal elements of the correlation matrix to minimize the squared residual when the factor model is the eigen value decomposition of the reduced matrix. MINRES and PA will both work when ML will not, for they can be used when the matrix is singular. At least on a number of test cases, the MINRES solution is slightly more similar to the ML solution than is the PA solution. To a great extent, the minres and wls solutions follow ideas in the factanal function.
The weighted least squares (wls) solution weights the residual matrix by 1/diagonal of the inverse of the correlation matrix. This has the effect of weighting items with low communalities more than those with high communalities.
The generalized least squares (gls) solution weights the residual matrix by the inverse of the correlation matrix. This has the effect of weighting those variables with low communalities even more than those with high communalities.
The maximum likelihood solution takes yet another approach and finds those communality values that minimize the chi square goodness of fit test. The fm="ml" option provides a maximum likelihood solution following the procedures used in factanal, but does not provide all the extra features of that function.
Test cases comparing the output to SPSS suggest that the PA algorithm matches what SPSS calls ULS, and that the wls solutions are equivalent in their fits. The wls and gls solutions have slightly larger eigen values, but slightly worse fits of the off diagonal residuals, than do the minres or maximum likelihood solutions. Comparing the results to the examples in Harman (1976), the PA solution with no iterations matches what Harman calls Principal Axes (as does SAS), while the iterated PA solution matches his minres solution. The minres solution found in psych tends to have slightly smaller off diagonal residuals (as it should) than does the iterated PA solution.
Although for items it is typical to find factor scores by scoring the salient items (using, e.g., score.items), factor scores can be estimated by regression as well as by several other means. There are multiple possible approaches (see Grice, 2001); the one taken here was developed by tenBerge et al. (see factor.scores). The alternative, which will match factanal, is to find the scores using regression -- Thurstone's least squares regression, where the weights are found by $W = R^{-1}S$, where R is the correlation matrix of the variables and S is the structure matrix. The factor scores are then just $F_s = XW$.
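A minimal sketch of the regression weights, assuming complete cases on the first 10 bfi items (the bfi data set also appears in the examples below):

library(psych)
x  <- na.omit(bfi[, 1:10])          #complete cases for simplicity
f2 <- fa(x, 2, scores="regression")
R  <- cor(x)
W  <- solve(R) %*% f2$Structure     #W = R^{-1} S
head(scale(x) %*% W)                #agrees with head(f2$scores) up to rounding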
In the oblique case, the factor loadings are referred to as Pattern coefficients and are related to the Structure coefficients by $S = P \Phi$, and thus $P = S \Phi^{-1}$. When estimating factor scores, fa and factanal differ in that fa finds the factors from the Structure matrix while factanal seems to do it from the Pattern matrix. Thus, although in the orthogonal case fa and factanal agree perfectly in their factor score estimates, they do not agree in the case of oblique factors. Setting oblique.scores=TRUE will produce factor score estimates that match those of factanal.
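The pattern/structure relation is easy to verify from a fitted oblique solution:

library(psych)
f <- fa(Harman74.cor$cov, 4, rotate="oblimin")
max(abs(f$Structure - f$loadings %*% f$Phi))   #essentially zero: S = P Phi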
It is sometimes useful to extend the factor solution to variables that were not factored. This may be done using fa.extension. Factor extension is typically done when some variables were not appropriate to factor, but loadings of those variables on the original factors are still desired.
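A hedged sketch of factor extension, assuming fa.extension's convention that Roe holds the correlations of the original variables (rows) with the extension variables (columns):

library(psych)
R   <- cor(na.omit(bfi[, 1:20]))
fo  <- fa(R[1:15, 1:15], 3)    #factor the first 15 variables
Roe <- R[1:15, 16:20]          #originals by extension correlations
fe  <- fa.extension(Roe, fo)   #loadings of the 5 unfactored variables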
For dichotomous or polytomous items, it is recommended to analyze the tetrachoric or polychoric correlations rather than the Pearson correlations. This is done automatically when using the irt.fa or fa.poly functions. In the first case, the factor analysis results are reported in Item Response Theory (IRT) terms, although the original factor solution is returned in the results. In the latter case, a typical factor loadings matrix is returned, but the tetrachoric/polychoric correlation matrix and item statistics are saved for reanalysis by irt.fa. (See also the mixed.cor function to find correlations from a mixture of continuous, dichotomous, and polytomous items.)
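For example (assuming, per the usage above, that cor="poly" selects polychoric correlations for the polytomous bfi items):

library(psych)
f.poly <- fa(bfi[, 1:5], 1, cor="poly")  #factor the polychoric matrix
f.irt  <- irt.fa(bfi[, 1:5])             #same model, reported in IRT terms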
Of the various rotation/transformation options, varimax, Varimax, quartimax, bentlerT, geominT, and bifactor do orthogonal rotations. Promax transforms obliquely with a target matrix equal to the varimax solution. oblimin, quartimin, simplimax, bentlerQ, geominQ, and biquartimin are oblique transformations. Most of these are just calls to the GPArotation package. The "cluster" option does a targeted rotation to a structure defined by the cluster representation of a varimax solution. With the optional "keys" parameter, the "target" option will rotate to a target supplied as a keys matrix (see target.rot).
Two additional target rotation options are available through calls to GPArotation. These are the targetQ (oblique) and targetT (orthogonal) target rotations of Michael Browne. See target.rot for more documentation.
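A brief sketch of a targeted rotation using target.rot's default cluster-based target:

library(psych)
f  <- fa(Harman74.cor$cov, 4, rotate="varimax")
ft <- target.rot(f)   #obliquely rotate toward the cluster structure target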
The "bifactor" rotation implements the Jennrich and Bentler (2011) bifactor rotation by calling the GPForth function in the GPArotation package and using two functions adapted from the MatLab code of Jennrich and Bentler.
There are two varimax rotation functions. One, Varimax, in the GPArotation package does not by default apply Kaiser normalization. The other, varimax, in the stats package, does. It appears that the two rotation functions produce slightly different results even when normalization is set. For consistency with the other rotation functions, Varimax is probably preferred.
There are three ways to handle dichotomous or polytomous responses: fa with the cor="poly" option; fa.poly, which will return the tetrachoric or polychoric correlation matrix as well as the normal factor analysis output; and irt.fa, which returns a two parameter IRT analysis as well as the normal fa output.
When factor analyzing items with dichotomous or polytomous responses, the irt.fa function provides an Item Response Theory representation of the factor output. The factor analysis results are available, however, as an object in the irt.fa output.
fa.poly is appropriate if the data are categorical (but just setting the cor="poly" option works as well). It will produce normal factor analysis output, but will also save the polychoric correlation matrix (rho) and the item difficulties (tau) for subsequent IRT analyses. fa.poly will, by default, find factor scores if the data are available. The correlations are found using either tetrachoric or polychoric, and this matrix is then factored. Weights from the factors are then applied to the original data to estimate factor scores.
The function fa will repeat the analysis n.iter times on a bootstrapped sample of the data (if raw data exist) or on a simulated data set based upon the observed correlation matrix. The mean estimate and standard deviation of each estimate are returned, and the output will print the original factor analysis as well as the alpha level confidence intervals for the estimated coefficients. The bootstrapped solutions are rotated towards the original solution using target.rot. The factor loadings are z-transformed, averaged, and then back transformed. The default is n.iter=1, and thus no bootstrapping is done.
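For example, a small bootstrap (20 iterations for speed; more would normally be used):

library(psych)
fb <- fa(na.omit(bfi[, 1:10]), 2, n.iter=20)
fb   #prints the loadings with their bootstrapped confidence intervals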
fa.poly will find confidence intervals for a factor solution for dichotomous or polytomous items (set n.iter > 1 to do so), as will fa with the cor="poly" option. Perhaps more useful is to find the Item Response Theory parameters equivalent to the factor loadings reported by fa.poly, using the irt.fa function.
Some correlation matrices that arise from pairwise deletion or from tetrachoric or polychoric correlations will not be proper. That is, they will not be positive semi-definite (all eigen values >= 0). The cor.smooth function will adjust (smooth) such correlation matrices by making all negative eigen values slightly greater than 0, rescaling the other eigen values to sum to the number of variables, and then recreating the correlation matrix. See cor.smooth for an example of this problem using the burt data set.
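A sketch using the burt data set mentioned above:

library(psych)
min(eigen(burt)$values)      #expected to be slightly negative (improper)
burt.s <- cor.smooth(burt)   #rescale eigen values, rebuild the matrix
min(eigen(burt.s)$values)    #now non-negative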
For those who like SPSS type output, the measure of factoring adequacy known as the Kaiser-Meyer-Olkin (KMO) test may be found from the correlation matrix or data matrix using the KMO function. Similarly, Bartlett's test of sphericity may be found using the cortest.bartlett function.
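For example:

library(psych)
KMO(bfi[, 1:25])                      #Kaiser-Meyer-Olkin factoring adequacy
R <- cor(bfi[, 1:25], use="pairwise")
cortest.bartlett(R, n=nrow(bfi))      #Bartlett's test of sphericity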
For those who want an object containing the variances accounted for, this is returned invisibly by the print function.
Grice, James W. (2001) Computing and evaluating factor scores. Psychological Methods, 6, 430-450.
Harman, Harry and Jones, Wayne (1966) Factor analysis by minimizing residuals (minres). Psychometrika, 31(3), 351-368.
Hofmann, R. J. (1978) Complexity and simplicity as objective indices descriptive of factor solutions. Multivariate Behavioral Research, 13, 247-250.
Pettersson, E. and Turkheimer, E. (2010) Item selection, evaluation, and simple structure in personality data. Journal of Research in Personality, 44(4), 407-420.
Revelle, William (in prep) An Introduction to Psychometric Theory with Applications in R. Springer. Working draft available from the author's web site.
See also principal for principal components analysis (PCA), irt.fa for Item Response Theory analyses using factor analysis, VSS for the Very Simple Structure (VSS) and MAP criteria for the number of factors, nfactors to compare many different factor criteria, ICLUST for hierarchical cluster analysis alternatives to factor analysis or principal components analysis, predict.psych to find predicted scores based upon new data, fa.extension to extend the factor solution to new variables, and omega for hierarchical factor analysis. fa.sort will sort the factor loadings into echelon form. fa.organize will reorganize the factor pattern matrix into any arbitrary order of factors and items. KMO and cortest.bartlett provide various tests that some people like.
#using the Harman 24 mental tests, compare a principal factor with a principal components solution
pc <- principal(Harman74.cor$cov,4,rotate="varimax") #principal components
pa <- fa(Harman74.cor$cov,4,fm="pa" ,rotate="varimax") #principal axis
uls <- fa(Harman74.cor$cov,4,rotate="varimax") #unweighted least squares is minres
wls <- fa(Harman74.cor$cov,4,fm="wls") #weighted least squares
#to show the loadings sorted by absolute value
print(uls,sort=TRUE)
#then compare with a maximum likelihood solution using factanal
mle <- factanal(covmat=Harman74.cor$cov,factors=4)
factor.congruence(list(mle,pa,pc,uls,wls))
#note that the order of the factors and the sign of some factors may differ
#finally, compare the unrotated factor, ml, uls, and wls solutions
wls <- fa(Harman74.cor$cov,4,rotate="none",fm="wls")
pa <- fa(Harman74.cor$cov,4,rotate="none",fm="pa")
minres <- factanal(factors=4,covmat=Harman74.cor$cov,rotation="none")
mle <- fa(Harman74.cor$cov,4,rotate="none",fm="mle")
uls <- fa(Harman74.cor$cov,4,rotate="none",fm="uls")
factor.congruence(list(minres,mle,pa,wls,uls))
#in particular, note the similarity of the mle and minres solutions
#note that the order of the factors and the sign of some factors may differ
#an example where the ML, PA, and MR models differ is found in Thurstone.33
#compare the first two factors with the 3 factor solution
Thurstone.33 <- as.matrix(Thurstone.33)
mle2 <- fa(Thurstone.33,2,rotate="none",fm="mle")
mle3 <- fa(Thurstone.33,3 ,rotate="none",fm="mle")
pa2 <- fa(Thurstone.33,2,rotate="none",fm="pa")
pa3 <- fa(Thurstone.33,3,rotate="none",fm="pa")
mr2 <- fa(Thurstone.33,2,rotate="none")
mr3 <- fa(Thurstone.33,3,rotate="none")
factor.congruence(list(mle2,mr2,pa2,mle3,pa3,mr3))
#f5 <- fa(bfi[1:25],5)
#f5 #names are not in ascending numerical order (see note)
#colnames(f5$loadings) <- paste("F",1:5,sep="")
#f5