Computes the kappa statistic for agreement between two raters, performs a hypothesis test, and calculates confidence intervals.
epiKappa(C, alpha=0.05, k0=0.4, digits=3)
C: An n x n classification matrix of counts, or a matrix of proportions.
k0: The null hypothesis value of kappa, i.e. H0: kappa = k0.
alpha: The desired Type I error rate for hypothesis tests and confidence intervals.
digits: The number of digits to which calculations are rounded.
The computed value of the kappa statistic.
The standard error of kappa computed under H0.
The standard error of kappa used for confidence intervals.
Lower confidence limit for \(\kappa\).
Upper confidence limit for \(\kappa\).
Hypothesis test statistic for \(\kappa = \kappa_0\) vs. \(\kappa > \kappa_0\).
P-value for the hypothesis test.
The original matrix of agreement.
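For orientation: under the usual large-sample normal approximation (a standard construction following Fleiss 1981, not a transcription of the package source), the test statistic is \(Z = (\hat{\kappa} - \kappa_0)/SE_{H_0}\) with one-sided p-value \(1 - \Phi(Z)\), and the \(100(1-\alpha)\%\) confidence limits are \(\hat{\kappa} \pm z_{1-\alpha/2}\,SE_{CI}\), using the two standard errors described above.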
The kappa statistic measures agreement between two raters. For simplicity, consider the case where each rater classifies an object as either Type I or Type II. The diagonal elements of the resulting 2x2 matrix are then the concordant observations, where both raters classify an object as Type I or both as Type II; the discordant observations lie on the off-diagonal. Note that the alternative hypothesis is always one-sided (greater than), since interest lies in whether kappa exceeds a given threshold, such as 0.4 for fair agreement.
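As a point of reference, kappa is the chance-corrected proportion of agreement, \(\kappa = (p_o - p_e)/(1 - p_e)\), where \(p_o\) is the observed agreement and \(p_e\) is the agreement expected by chance. The sketch below computes this point estimate directly (the helper name kappaByHand is illustrative only, not part of the package); epiKappa additionally supplies the standard errors, confidence limits, and one-sided test.

kappaByHand <- function(C) {
  p <- C / sum(C)                         # convert counts to proportions
  p.obs <- sum(diag(p))                   # observed agreement (diagonal)
  p.exp <- sum(rowSums(p) * colSums(p))   # agreement expected by chance
  (p.obs - p.exp) / (1 - p.exp)           # kappa = (p_o - p_e) / (1 - p_e)
}
kappaByHand(cbind(c(28,5), c(4,61)))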
Szklo M and Nieto FJ. Epidemiology: Beyond the Basics. Boston: Jones and Bartlett; 2007.
Fleiss J. Statistical Methods for Rates and Proportions. 2nd ed. New York: John Wiley and Sons; 1981.
X <- cbind(c(28,5), c(4,61))
summary(epiKappa(X, alpha=0.05, k0 = 0.6))
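Here the one-sided test asks whether agreement exceeds the threshold \(\kappa_0 = 0.6\); the summary method reports the quantities described above (the kappa estimate, its standard errors, confidence limits, test statistic, and p-value).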