psy (version 1.1)

icc: Intraclass correlation coefficient (ICC)

Description

Computes the ICC of several series of measurements, for example in an interrater agreement study. Two types of ICC are proposed: consistency and agreement.

Usage

icc(data)

Arguments

data

An n*p matrix or data frame with n rows (subjects) and p columns (raters)

Value

A list with:

$nb.subjects

number of subjects under study

$nb.raters

number of raters

$subject.variance

subject variance

$rater.variance

rater variance

$residual

residual variance

$icc.consistency

Intraclass correlation coefficient, "consistency" version

$icc.agreement

Intraclass correlation coefficient, "agreement" version

Details

Missing data are omitted in a listwise way. The "agreement" ICC is the ratio of the subject variance to the sum of the subject variance, the rater variance, and the residual variance; it is generally preferred. The "consistency" ICC is the ratio of the subject variance to the sum of the subject variance and the residual variance; it may be of interest when estimating the reliability of pre/post variations in measurements.
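As a sketch, the two coefficients can be recomputed by hand from a standard two-way ANOVA decomposition (subjects x raters). The data frame ratings below is made up for illustration, and the decomposition assumes a balanced design with no missing data; compare the results against the output of icc itself:

```r
# Hypothetical example: 5 subjects rated by 3 raters (made-up data)
ratings <- data.frame(r1 = c(2, 4, 5, 3, 4),
                      r2 = c(3, 5, 5, 2, 4),
                      r3 = c(2, 4, 6, 3, 5))

y <- as.matrix(ratings)
n <- nrow(y); p <- ncol(y)
grand <- mean(y)

# Mean squares for subjects, raters, and residual
ms.subject <- p * sum((rowMeans(y) - grand)^2) / (n - 1)
ms.rater   <- n * sum((colMeans(y) - grand)^2) / (p - 1)
ss.resid   <- sum((y - outer(rowMeans(y), rep(1, p)) -
                   outer(rep(1, n), colMeans(y)) + grand)^2)
ms.resid   <- ss.resid / ((n - 1) * (p - 1))

# Variance components
var.subject <- (ms.subject - ms.resid) / p
var.rater   <- (ms.rater   - ms.resid) / n
var.resid   <- ms.resid

# "Consistency" ignores the rater variance; "agreement" includes it
icc.consistency <- var.subject / (var.subject + var.resid)
icc.agreement   <- var.subject / (var.subject + var.rater + var.resid)
```

Because the agreement version adds the rater variance to the denominator, it is never larger than the consistency version, and the two coincide only when raters do not differ systematically.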

References

Shrout, P.E., Fleiss, J.L. (1979), Intraclass correlation: uses in assessing rater reliability, Psychological Bulletin, 86, 420-428.

Examples

data(expsy)
icc(expsy[, c(12, 14, 16)])

# To obtain a 95% bootstrap confidence interval for the "agreement" ICC
# (the 7th element of the list returned by icc):
library(boot)
icc.boot <- function(data, x) icc(data[x, ])[[7]]
res <- boot(expsy[, c(12, 14, 16)], icc.boot, 1000)
quantile(res$t, c(0.025, 0.975))  # two-sided percentile confidence interval
boot.ci(res, type = "bca")        # adjusted bootstrap percentile (BCa) interval (better)
