raters (version 1.0)

raters-package: Inter rater agreement among a set of raters

Description

Computes a statistic as an index of inter-rater agreement among a set of raters. The procedure is based on a statistic that is not affected by Kappa paradoxes. It is also possible to test whether the agreement is nil using the test argument. The p value can be approximated using the Normal distribution, the Chi-squared distribution, or a Monte Carlo algorithm. Fleiss' Kappa is also shown.
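As a rough sketch of the expected call, the snippet below builds a toy count table (one row per subject, one column per category, each row summing to the number of raters) and asks for the Normal approximation. The input layout and the toy data are assumptions for illustration only and should be checked against the concordance help page.

library(raters)

# Hypothetical toy data: 4 subjects, each rated by 5 raters into 3 categories.
# Entry [i, j] counts how many raters placed subject i in category j
# (assumed input format; each row sums to the number of raters).
counts <- matrix(c(5, 0, 0,
                   2, 3, 0,
                   0, 1, 4,
                   1, 1, 3),
                 nrow = 4, byrow = TRUE)

concordance(counts, test = "Normal")  # p value via the Normal approximation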

Details

Package: raters
Type: Package
Version: 1.0
Date: 2014-01-31
License: GPL-2

References

Fleiss, J.L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382.

Falotico, R. and Quatto, P. (2010). On avoiding paradoxes in assessing inter-rater agreement. Italian Journal of Applied Statistics, 22, 151-160.

Examples

library(raters)
data(diagnostic)  # example data set shipped with the package
concordance(diagnostic, test = "Normal")  # Normal approximation of the p value
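The Description also mentions a Monte Carlo approximation of the p value. A sketch of that call follows, assuming the option is selected with test = "MC" and the number of replications is set by an argument such as B; both names are assumptions not confirmed by this page, so check ?concordance before use.

# Assumed option names: test = "MC" for the Monte Carlo approximation,
# B for the number of Monte Carlo replications (verify in ?concordance).
concordance(diagnostic, test = "MC", B = 1000)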
