Computes Light's Kappa as an index of interrater agreement between m raters on categorical data.
Usage
kappam.light(ratings)
Arguments
ratings
n*m matrix or data frame with n subjects and m raters.
Value
A list with class '"irrlist"' containing the following components:
$method	a character string describing the method applied for the computation of interrater reliability.
$subjects	the number of subjects examined.
$raters	the number of raters.
$irr.name	a character string specifying the name of the coefficient.
$value	value of Kappa.
$stat.name	a character string specifying the name of the corresponding test statistic.
$statistic	the value of the test statistic.
$p.value	the p-value for the test.
Details
Missing data are omitted in a listwise way.
Light's Kappa equals the average of all possible combinations of bivariate Kappas between raters.
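The averaging described above can be sketched by hand. The snippet below is an illustrative sketch (not the package's internal code): it assumes the irr package is installed, uses its kappa2 function for the bivariate Kappas and its bundled diagnoses example data, and averages over all rater pairs generated by combn.

```r
# Sketch: Light's Kappa as the mean of all pairwise (bivariate) Kappas.
# Assumes the irr package is available; diagnoses is an example data set
# shipped with irr (n subjects in rows, raters in columns).
library(irr)
data(diagnoses)

rater.pairs <- combn(ncol(diagnoses), 2)   # all possible rater pairs
pairwise.kappas <- apply(rater.pairs, 2,
                         function(p) kappa2(diagnoses[, p])$value)
mean(pairwise.kappas)                      # Light's Kappa
```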
References
Conger, A.J. (1980). Integration and generalisation of Kappas for multiple raters. Psychological Bulletin, 88, 322-328.
Light, R.J. (1971). Measures of response agreement for qualitative data: Some generalizations and alternatives. Psychological Bulletin, 76, 365-377.
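Examples
A minimal usage sketch, assuming the irr package and its bundled diagnoses example data (n subjects in rows, m raters in columns):

```r
# Light's Kappa for the m raters in the diagnoses example data
library(irr)
data(diagnoses)
kappam.light(diagnoses)
```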