`principal(r, nfactors = 1, residuals = FALSE, rotate = "varimax", n.obs = NA, scores = FALSE, missing = FALSE, impute = "median", oblique.scores = TRUE)`

r

a correlation matrix. If a raw data matrix is used, the correlations will be found using pairwise deletion for missing values.

nfactors

Number of components to extract

residuals

If FALSE (default), do not report residuals; if TRUE, report them.

rotate

"none", "varimax", "quartimax", "promax", "oblimin", "simplimax", and "cluster" are possible rotations/transformations of the solution.

n.obs

Number of observations used to find the correlation matrix if using a correlation matrix. Used for finding the goodness of fit statistics.

scores

If TRUE, find component scores

missing

if scores=TRUE and missing=TRUE, then impute missing values using either the median or the mean

impute

"median" or "mean" values are used to replace missing values

oblique.scores

If TRUE (default), then the component scores are based upon the structure matrix. If FALSE, upon the pattern matrix.

values

Eigenvalues of all components -- useful for a scree plot

rotation

which rotation was requested?

n.obs

number of observations specified or found

communality

Communality estimates for each item. These are merely the sum of squared component loadings for that item.

loadings

A standard loading matrix of class "loadings"

fit

Fit of the model to the correlation matrix

fit.off

how well are the off-diagonal elements reproduced?

residual

Residual matrix -- if requested

dof

Degrees of freedom for this model. This is the number of observed correlations minus the number of independent parameters (number of items * number of factors - nf*(nf-1)/2). That is, dof = ni * (ni-1)/2 - ni * nf + nf*(nf-1)/2.

objective

Value of the function that is minimized by maximum likelihood procedures. This is reported for comparison purposes and as a way to estimate chi-square goodness of fit. The objective function is $f = \log(\mathrm{trace}((FF'+U^2)^{-1} R)) - \log(|(FF'+U^2)^{-1} R|) - n.items$. Because components do not minimize the off-diagonal elements, this fit will not be as good as for factor analysis.

STATISTIC

If the number of observations is specified or found, this is a chi-square based upon the objective function, f, using the formula from `factanal`: $\chi^2 = (n.obs - 1 - (2p + 5)/6 - (2 \cdot factors)/3) \cdot f$

PVAL

If n.obs > 0, the probability of observing a chi-square this large or larger

phi

If oblique rotations (using oblimin from the GPArotation package) are requested, the interfactor correlations

scores

If scores=TRUE, the estimated factor scores

weights

The beta weights used to find the principal components from the data

R2

The multiple R squared between the factors and the factor score estimates, if they were to be found. (From Grice, 2001.) For components, these are of course 1.0.

valid

The correlations of the component score estimates with the components, if they were to be found and unit weights were used (so-called coarse coding).
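The dof and chi-square formulas above can be sketched directly in R. This is a minimal illustration; the variable names `ni`, `nf`, and the helper `chisq` are assumptions for the example, not psych internals:

```r
# Degrees of freedom: observed correlations minus free parameters.
ni <- 24    # number of items (as in the Harman 24-variable problem)
nf <- 4     # number of components/factors extracted
dof <- ni * (ni - 1) / 2 - ni * nf + nf * (nf - 1) / 2
dof         # 276 - 96 + 6 = 186

# Chi-square from the objective value f, given n.obs observations and p items
chisq <- function(f, n.obs, p, factors) {
  (n.obs - 1 - (2 * p + 5) / 6 - (2 * factors) / 3) * f
}
```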

There are a number of data reduction techniques, including principal components and factor analysis. Both PC and FA attempt to approximate a given correlation or covariance matrix of rank n with a matrix of lower rank k: $_nR_n \approx {}_{n}F_{k}\,{}_{k}F_{n}' + U^2$, where k is much less than n. For principal components, the item uniquenesses are assumed to be zero and all elements of the correlation matrix are fitted. That is, $_nR_n \approx {}_{n}F_{k}\,{}_{k}F_{n}'$. The primary empirical difference between a components model and a factor model is the treatment of the variances of each item. Philosophically, components are weighted composites of observed variables, while in the factor model, variables are weighted composites of the factors.

For an n x n correlation matrix, the n principal components completely reproduce the correlation matrix. However, if just the first k principal components are extracted, this is the best k-dimensional approximation of the matrix.
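This reproduction property can be sketched in base R with the eigendecomposition of a correlation matrix (the built-in `attitude` data set is used purely for illustration):

```r
# Rank-k approximation of a correlation matrix from its eigendecomposition.
R <- cor(attitude)                    # 7 x 7 correlation matrix (base R dataset)
e <- eigen(R)
k <- 2
# Unrotated component loadings: eigenvectors scaled by sqrt(eigenvalues)
L <- e$vectors[, 1:k] %*% diag(sqrt(e$values[1:k]))
R.k <- L %*% t(L)                     # best k-dimensional approximation of R
# All n components reproduce R exactly:
L.full <- e$vectors %*% diag(sqrt(e$values))
max(abs(L.full %*% t(L.full) - R))    # effectively zero
```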

It is important to recognize that rotated principal components are not principal components (the axes associated with the eigenvalue decomposition) but are merely components. To point this out, unrotated principal components are labeled as PCi, while rotated PCs are now labeled as RCi (for rotated components) and obliquely transformed components as TCi (for transformed components). (Thanks to Ulrike Grömping for this suggestion.)

Rotations and transformations are either part of psych (Promax and cluster), of base R (varimax), or of GPArotation (simplimax, quartimax, oblimin).

Some of the statistics reported are more appropriate for (maximum likelihood) factor analysis rather than principal components analysis, and are reported to allow comparisons with these other models.

Although for items it is typical to find component scores by scoring the salient items (using, e.g., `score.items`), component scores are found here by regression, where the regression weights are $R^{-1} \lambda$ and $\lambda$ is the matrix of component loadings. The regression approach is used to parallel the factor analysis function `fa`. The regression weights are found from the inverse of the correlation matrix times the component loadings. As a result, the component scores are standard scores (mean = 0, sd = 1) of the standardized input. A comparison to the scores from `princomp` shows this difference: `princomp` does not, by default, standardize the data matrix, nor are the components themselves standardized. By default, the regression weights are found from the Structure matrix, not the Pattern matrix.
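This regression approach can be sketched in base R for unrotated components (a minimal illustration mirroring the description above, not psych's internal code):

```r
# Component scores by regression: weights are solve(R) %*% loadings.
X <- scale(attitude)                  # standardized input data
R <- cor(attitude)
e <- eigen(R)
k <- 2
lambda <- e$vectors[, 1:k] %*% diag(sqrt(e$values[1:k]))  # component loadings
W <- solve(R) %*% lambda              # regression weights, R^{-1} lambda
S <- X %*% W                          # component scores
apply(S, 2, sd)                       # each column is a standard score: sd = 1
```

Because $\lambda' R^{-1} \lambda = I$ for unrotated components, the resulting scores have unit variance, which is exactly the standardization described above.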

Revelle, W. An introduction to psychometric theory with applications in R (in prep) Springer. Draft chapters available at

`VSS` (to test for the number of components or factors to extract), `VSS.scree` and `fa.parallel` (to show a scree plot and compare it with random resamplings of the data), `factor2cluster` (for coarse coding keys), `fa` (for factor analysis), `factor.congruence` (to compare solutions)

```
#Four principal components of the Harmon 24 variable problem
#compare to a four factor principal axes solution using factor.congruence
library(psych)   # provides principal, fa, and factor.congruence
pc <- principal(Harman74.cor$cov,4,rotate="varimax")
mr <- fa(Harman74.cor$cov,4,rotate="varimax") #minres factor analysis
pa <- fa(Harman74.cor$cov,4,rotate="varimax",fm="pa") # principal axis factor analysis
round(factor.congruence(list(pc,mr,pa)),2)
```
