raters-package: Inter-rater agreement among a set of raters
Description
Computes a statistic as an index of inter-rater agreement among a set of raters. The procedure is based on a statistic that is not affected by Kappa paradoxes.
It is also possible to test whether the agreement is nil using the test argument.
The p value can be approximated using the Normal or Chi-squared distribution, or
by a Monte Carlo algorithm. Fleiss' Kappa is also shown.
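A minimal usage sketch follows. The function name concordance, the bundled data set diagnostic, and the test and B argument values are assumptions based on this description rather than a verified rendering of the package's API.

    ## Minimal usage sketch; function name, data set and argument values
    ## are assumptions based on the description above.
    library(raters)

    ## Fleiss-style data: rows are subjects, columns are nominal categories,
    ## each cell counts the raters who assigned that subject to that category.
    data(diagnostic)

    ## Agreement statistic with a Normal approximation of the p value
    concordance(diagnostic, test = "Normal")

    ## Monte Carlo approximation of the p value based on B simulated tables
    concordance(diagnostic, test = "MC", B = 1000)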
References
Fleiss, J.L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382.
Falotico, R. and Quatto, P. (2010). On avoiding paradoxes in assessing inter-rater agreement. Italian Journal of Applied Statistics, 22, 151-160.