CompareTests (version 1.3)

Correct for Verification Bias in Diagnostic Accuracy & Agreement

Description

A standard test is observed on all specimens. We treat the second test (or sampled test) as being conducted on only a stratified sample of specimens. Verification bias arises in this situation when the choice of specimens receiving the second (sampled) test is not under investigator control. We treat the total sample as stratified two-phase sampling and use inverse probability weighting. We estimate diagnostic accuracy (category-specific classification probabilities, which for binary tests reduce to sensitivity and specificity, as well as predictive values) and agreement statistics (percent agreement, percent agreement by category, unweighted Kappa, quadratic-weighted Kappa, and symmetry tests, which reduce to McNemar's test for binary tests). See: Katki HA, Li Y, Edelstein DW, Castle PE. Estimating the agreement and diagnostic accuracy of two diagnostic tests when one test is conducted on only a subsample of specimens. Stat Med. 2012 Feb 28;31(5).
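
Below is a minimal usage sketch of the intended workflow, not a verbatim excerpt from the package manual. It assumes the package is installed (see Install below) and uses the bundled specimens data set; the columns stdtest, sampledtest, and stratum are assumed from the CompareTests() argument names, so check names(specimens) against your installed copy.

library(CompareTests)

# Fictitious data on specimens tested by two methods (ships with the package)
data(specimens)

# Specimens not selected for the second-phase (sampled) test carry NA in
# sampledtest; exclude = NULL keeps them visible in the cross-tabulation
table(specimens$stdtest, specimens$sampledtest, exclude = NULL)

# Estimate accuracy and agreement, reweighting second-phase specimens by
# inverse within-stratum sampling fractions; goldstd declares which test
# is treated as the gold standard
CompareTests(specimens$stdtest, specimens$sampledtest,
             specimens$stratum, goldstd = "sampledtest")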

Install

install.packages('CompareTests')

Monthly Downloads: 210

Version: 1.3

License: GPL-3

Maintainer: Hormuzd A Katki

Last Published: December 12th, 2024

Functions in CompareTests (1.3)

CompareTests
Correct for Verification Bias in Diagnostic Accuracy & Agreement

specimens
Fictitious data on specimens tested by two methods

fulltable
Attaches margins and an NA/NaN category to the output of table() (see the sketch after this list)

CompareTests-package
Correct for Verification Bias in Diagnostic Accuracy & Agreement
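
As a sketch of fulltable, assuming it accepts the same arguments as base table() (the entry above describes it as adding margins and an NA/NaN category to table() output); the specimens columns are the assumed names used in the earlier example:

library(CompareTests)
data(specimens)

# Like table(), but with row/column totals and an explicit NA/NaN
# category, so unverified specimens are counted instead of dropped
fulltable(specimens$stdtest, specimens$sampledtest)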