See make.keys for a convenient way to make the keys file. If the input is a square matrix, then it is assumed that the input is a covariance or correlation matrix and scores are not found, but the item statistics are reported (similar functionality to cluster.cor). response.frequencies reports the frequency of item endorsements for each response category for polytomous or multiple choice items.

score.items(keys, items, totals = FALSE, ilabels = NULL, missing = TRUE, impute = "median", min = NULL, max = NULL, digits = 2)
response.frequencies(items, max = 10)
Various estimates of scale reliability include ``Cronbach's alpha", Guttman's Lambda 6, and the average interitem correlation. For k = number of items in a scale, and av.r = average correlation between items in the scale, alpha = k * av.r/(1+ (k-1)*av.r). Thus, alpha is an increasing function of test length as well as of test homogeneity.
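As a quick numerical sketch of the formula above (the values of k and av.r here are hypothetical, not taken from any data set), alpha can be computed directly:

```r
# alpha as a function of test length (k) and average inter-item correlation (av.r)
alpha.from.r <- function(k, av.r) k * av.r / (1 + (k - 1) * av.r)

alpha.from.r(10, 0.2)   # a 10-item scale with av.r = .2 gives about 0.71
alpha.from.r(20, 0.2)   # doubling the test length raises alpha to about 0.83
```

Note how alpha grows with k even though the average inter-item correlation is held fixed.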
Surprisingly, 106 years after Spearman (1904) introduced the concept of reliability to psychologists, there are still multiple approaches for measuring it. Although very popular, Cronbach's $\alpha$ (1951) underestimates the reliability of a test and overestimates the first factor saturation.
$\alpha$ (Cronbach, 1951) is the same as Guttman's $\lambda_3$ (Guttman, 1945) and may be found by $$\lambda_3 = \frac{n}{n-1}\Bigl(1 - \frac{tr(\vec{V}_x)}{V_x}\Bigr) = \frac{n}{n-1} \frac{V_x - tr(\vec{V}_x)}{V_x} = \alpha$$
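The $\lambda_3$ formula translates directly into base R. A minimal sketch, using a made-up 3 x 3 equicorrelated covariance matrix (this matrix is purely illustrative):

```r
# lambda_3 (alpha) from a covariance matrix: (n/(n-1)) * (1 - tr(V)/V_x),
# where V_x is the total test variance, i.e. the sum of all elements of V
V <- matrix(c(1, .3, .3,
              .3, 1, .3,
              .3, .3, 1), 3, 3)
n <- ncol(V)
lambda3 <- (n / (n - 1)) * (1 - sum(diag(V)) / sum(V))
lambda3   # 0.5625
```

This agrees with the k * av.r/(1 + (k-1)*av.r) form: for k = 3 and av.r = .3, both give 0.5625.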
Perhaps because it is so easy to calculate and is available in most commercial programs, alpha is without doubt the most frequently reported measure of internal consistency reliability. Alpha is the mean of all possible split-half reliabilities (corrected for test length). For a unifactorial test, it is a reasonable estimate of the first factor saturation, although if the test has any microstructure (i.e., if it is ``lumpy") coefficients $\beta$ (Revelle, 1979; see ICLUST) and $\omega_h$ (see omega) are more appropriate estimates of the general factor saturation. $\omega_t$ (see omega) is a better estimate of the reliability of the total test.
Guttman's Lambda 6 (G6) considers the amount of variance in each item that can be accounted for by the linear regression of all of the other items (the squared multiple correlation or smc), or more precisely, the variance of the errors, $e_j^2$, and is $$\lambda_6 = 1 - \frac{\sum e_j^2}{V_x} = 1 - \frac{\sum(1-r_{smc}^2)}{V_x} .$$
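A minimal sketch of this computation in base R, using a made-up 3 x 3 correlation matrix (for a correlation matrix, $V_x$ is simply the sum of all its elements; the smc of each item can be obtained from the diagonal of the inverse matrix):

```r
# Guttman's lambda_6 from a correlation matrix, via the squared multiple
# correlation (smc) of each item with all the others
R <- matrix(c(1, .4, .4,
              .4, 1, .4,
              .4, .4, 1), 3, 3)
smc <- 1 - 1 / diag(solve(R))        # squared multiple correlations
lambda6 <- 1 - sum(1 - smc) / sum(R)
lambda6   # about 0.571
```

For this equicorrelated (equal-loading) matrix, alpha = 3*.4/(1 + 2*.4) = 0.667, so alpha > G6, as stated below.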
The squared multiple correlation is a lower bound for the item communality and as the number of items increases, becomes a better estimate.
G6 is also sensitive to lumpiness in the test and should not be taken as a measure of unifactorial structure. For lumpy tests, it will be greater than alpha. For tests with equal item loadings, alpha > G6, but if the loadings are unequal or if there is a general factor, G6 > alpha. Although it is normal when scoring just a single scale to calculate G6 from just those items within the scale, logically it is appropriate to estimate an item reliability from all items available. This is done here and is labeled as G6* to identify the subtle difference.
Alpha and G6* are both positive functions of the number of items in a test as well as of the average intercorrelation of the items in the test. When calculated from the item variances and total test variance, as is done here, raw alpha is sensitive to differences in the item variances. Standardized alpha is based upon the correlations rather than the covariances. Alpha is a generalization of an earlier estimate of reliability for tests with dichotomous items developed by Kuder and Richardson, known as KR20, and a shortcut approximation, KR21. (See Revelle, in prep).
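For dichotomous items, alpha reduces to KR20, which replaces the item variances with $p_j(1-p_j)$. A minimal sketch with a small made-up 0/1 response matrix (5 subjects, 4 items; the data are purely illustrative):

```r
# KR20 for dichotomous items: (k/(k-1)) * (1 - sum(p*q) / total test variance)
x <- matrix(c(1, 1, 0, 1,
              1, 0, 0, 1,
              0, 1, 0, 0,
              1, 1, 1, 1,
              0, 0, 0, 0), 5, 4, byrow = TRUE)
k <- ncol(x)
p <- colMeans(x)                     # proportion passing each item
KR20 <- (k / (k - 1)) * (1 - sum(p * (1 - p)) / var(rowSums(x)))
KR20   # 0.864
```

Note that `var()` uses the n-1 denominator while p*(1-p) is the population item variance; raw alpha as computed from sample covariances can therefore differ slightly from this classical KR20 form.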
More complete reliability analyses of a single scale can be done using the omega function, which finds $\omega_h$ and $\omega_t$ based upon a hierarchical factor analysis. Alternative estimates of the Greatest Lower Bound for the reliability are found in the guttman function.
Alpha is a poor estimate of the general factor saturation of a test (see Revelle and Zinbarg, 2009; Zinbarg et al., 2005), for it can seriously overestimate the size of a general factor, and a better, though not perfect, estimate of total test reliability, which it underestimates. Nonetheless, it is a useful statistic to report.
Correlations between scales are attenuated by a lack of reliability. Correcting correlations for reliability (by dividing by the square roots of the reliabilities of each scale) sometimes helps show structure.
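This correction for attenuation is a one-liner; the observed correlation and the two scale reliabilities below are hypothetical numbers, not from any data set:

```r
# classical correction for attenuation: r_xy / sqrt(rel_x * rel_y)
r.observed <- .45   # observed correlation between two scales
rel.x <- .70        # reliability of scale x
rel.y <- .80        # reliability of scale y
r.corrected <- r.observed / sqrt(rel.x * rel.y)
r.corrected   # about 0.60
```

Because the reliabilities are at most 1, the corrected correlation is always at least as large as the observed one.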
By default, missing values are replaced with the corresponding median value for that item. Means can be used instead (impute="mean"), or subjects with missing data can just be dropped (missing = FALSE). For data with a great deal of missingness, yet another option is to just find the average of the available responses (impute="none"). This is useful for finding means for scales for the SAPA project, where most scales are estimated from random subsamples of the items from the scale.
Revelle, W. and Zinbarg, R. E. (2009) Coefficients alpha, beta, omega and the glb: comments on Sijtsma. Psychometrika, 74(1), 145-154.
Zinbarg, R. E., Revelle, W., Yovel, I. and Li, W. (2005) Cronbach's alpha, Revelle's beta, and McDonald's omega h: their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70, 123-133.
See also make.keys for a convenient way to create the keys file, score.multiple.choice for multiple choice items, and alpha, correct.cor, cluster.cor, cluster.loadings, omega, and guttman for item/scale analysis. In addition, the irt.fa function provides an alternative way of examining the structure of a test and emphasizes item response theory approaches to the information returned by each item and the total test.

#see the example including the bfi data set
data(bfi)
keys.list <- list(agree=c(-1,2:5),conscientious=c(6:8,-9,-10),extraversion=c(-11,-12,13:15),neuroticism=c(16:20),openness = c(21,-22,23,24,-25))
keys <- make.keys(28,keys.list,item.labels=colnames(bfi))
scores <- score.items(keys,bfi,min=1,max=6)
summary(scores)
#to get the response frequencies, we need to not use the age variable
scores <- score.items(keys[1:27,],bfi[1:27],min=1,max=6)
scores
#compare this output to that for the impute="none" option.
#first make many of the items missing
missing.bfi <- bfi
missing.bfi[sample(dim(bfi)[1],500),sample(dim(bfi)[2],10)] <- NA
scores <- score.items(keys,missing.bfi,impute="none",min=1,max=6)
summary(scores)