The keys matrix can be entered by hand, copied from the clipboard (read.clipboard), or taken as output from the factor2cluster or make.keys functions. Similar functionality to scoreItems, which also gives item by cluster correlations.

cluster.cor(keys, r.mat, correct = TRUE, SMC = TRUE, item.smc = NULL, impute = TRUE)

scoreOverlap(keys, r, correct = TRUE, SMC = TRUE, av.r = TRUE, item.smc = NULL,
    impute = TRUE)
If the raw items are available, scoreItems is probably preferred unless the keys are overlapping. In the case of overlapping keys (items being scored on multiple scales), scoreOverlap
will adjust for this overlap by replacing the overlapping covariances (which are item variances when the same item appears on both scales) with the corresponding best estimate of an item's "true" variance, using either the average correlation or the smc estimate for that item. This parallels the operation done when finding alpha reliability. This is similar to ideas suggested by Cureton (1966) and Bashaw and Anderson (1966) but uses the smc or the average interitem correlation (default).
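A rough sketch of the underlying idea (not the actual psych internals; r.adj is just an illustrative name): replace each item's unit variance on the diagonal of the correlation matrix with a "true score" estimate such as its squared multiple correlation before forming the composites, so that a shared item does not contribute its full, error-containing variance to the covariance between two scales that both include it.

library(psych)
data(attitude)
r <- cor(attitude)
r.adj <- r
diag(r.adj) <- smc(r)   # replace the 1's with squared multiple correlations
# composites built from r.adj credit each shared item with an estimate of its
# "true" variance rather than its total (error-containing) variance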
A typical use in the SAPA project is to form item composites by clustering or factoring (see fa, ICLUST, principal), extract the clusters from these results (factor2cluster), and then form the composite correlation matrix using cluster.cor. The variables in this reduced matrix may then be used in multiple correlation procedures using mat.regress.
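A hedged sketch of that workflow using the attitude data (f2, keys.f, and cc are illustrative names; the SAPA data themselves are not used here):

library(psych)
data(attitude)
f2 <- fa(attitude, nfactors = 2)           # factor the items
keys.f <- factor2cluster(f2)               # turn the loadings into cluster keys
cc <- cluster.cor(keys.f, cor(attitude))   # composite correlation matrix
cc   # the resulting scale intercorrelations could then be passed to mat.regress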
The original item correlation matrix is pre-multiplied by the transpose of the keys matrix and post-multiplied by the keys matrix.
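A minimal illustration of that algebra, assuming a keys matrix from make.keys (items in rows, scales in columns); it reproduces only the uncorrected composite correlations, without the attenuation corrections and alpha estimates that cluster.cor adds:

library(psych)
data(attitude)
keys <- make.keys(attitude, list(first = 1:3, second = 4:7))
r.mat <- cor(attitude)
comp.cov <- t(keys) %*% r.mat %*% keys   # composite (scale) covariances
cov2cor(comp.cov)                        # uncorrected scale intercorrelations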
If some correlations are missing from the original matrix this will lead to missing values (NA) for scale intercorrelations based upon those lower level correlations. If impute=TRUE (the default), a warning is issued and the correlations are imputed based upon the average correlations of the non-missing elements of each scale.
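A toy illustration of that imputation behavior, with a hypothetical missing correlation inserted by hand (r.na is an illustrative name):

library(psych)
data(attitude)
keys <- make.keys(attitude, list(first = 1:3, second = 4:7))
r.na <- cor(attitude)
r.na[1, 2] <- r.na[2, 1] <- NA   # hypothetical missing correlation
cluster.cor(keys, r.na)          # warns, then imputes from the non-missing elements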
Because the alpha estimate of reliability is based upon the correlations of the items rather than upon the covariances, this estimate of alpha is sometimes called "standardized alpha". If the raw items are available, it is useful to compare standardized alpha with the raw alpha found using scoreItems. They will differ substantially only if the items differ a great deal in their variances.
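One way to make that comparison with the attitude items (a.first is an illustrative name; alpha here is the psych function, whose summary reports both raw and standardized alpha):

library(psych)
data(attitude)
a.first <- alpha(attitude[, 1:3])   # item analysis of a three item scale
a.first$total                       # compare raw_alpha with std.alpha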
scoreOverlap answers an important question when developing scales and related subscales, or when comparing alternative versions of scales. By removing the effect of item overlap, it gives a better estimate of the relationship between the latent variables estimated by the observed sum (mean) scores.
Cureton, E. (1966). Corrected item-test correlations. Psychometrika, 31(1):93-96.
factor2cluster, mat.regress, alpha, and most importantly, scoreItems, which will do all of what cluster.cor does for most users. cluster.cor is an important helper function for iclust.
library(psych)   # provides make.keys, cluster.cor, and scoreOverlap
data(attitude)
keys <- make.keys(attitude, list(first = 1:3, second = 4:7))
r.mat <- cor(attitude)
cluster.cor(keys, r.mat)
#compare this to the correlations correcting for item overlap
overlapping.keys <- make.keys(attitude, list(all = 1:7, first = 1:3, second = 4:7, first2 = 1:3))
cluster.cor(overlapping.keys, r.mat)     #unadjusted correlations
scoreOverlap(overlapping.keys, attitude) #adjusted correlations