Summary of MCTab Objects

Provides a concise summary of the content of MCTab objects. Computes
sensitivity, specificity, positive and negative predictive values, and positive
and negative likelihood ratios for a diagnostic test evaluated against a reference/gold standard.
Computes positive/negative percent agreement, overall percent agreement, and Kappa
when the new test is evaluated by comparison to a non-reference standard. Computes
average positive/negative agreement when neither test is the
reference, such as paired reader precision.
Usage:

getAccuracy(object, ...)

# S4 method for MCTab
getAccuracy(
  object,
  ref = c("r", "nr", "bnr"),
  alpha = 0.05,
  r_ci = c("wilson", "wald", "clopper-pearson"),
  nr_ci = c("wilson", "wald", "clopper-pearson"),
  bnr_ci = "bootstrap",
  bootCI = c("perc", "norm", "basic", "stud", "bca"),
  nrep = 1000,
  rng.seed = NULL,
  digits = 4,
  ...
)
Value:

A data frame containing the qualitative diagnostic accuracy criteria, with three columns: the point estimate and the lower and upper confidence limits.
sens: Sensitivity refers to how often the test is positive when the condition of interest is present.
spec: Specificity refers to how often the test is negative when the condition of interest is absent.
ppv: Positive predictive value refers to the percentage of subjects with a positive test result who have the target condition.
npv: Negative predictive value refers to the percentage of subjects with a negative test result who do not have the target condition.
plr: Positive likelihood ratio, the true positive rate divided by the false positive rate.
nlr: Negative likelihood ratio, the false negative rate divided by the true negative rate.
ppa: Positive percent agreement; computed like sensitivity, but reported when the candidate method is evaluated by comparison with a comparative method rather than a reference/gold standard.
npa: Negative percent agreement; computed like specificity, but reported when the candidate method is evaluated by comparison with a comparative method rather than a reference/gold standard.
opa: Overall percent agreement.
kappa: Cohen's kappa coefficient to measure the level of agreement.
apa: Average positive agreement; summarizes the positive agreements and can be regarded as a weighted ppa.
ana: Average negative agreement; summarizes the negative agreements and can be regarded as a weighted npa.
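The definitions above can be illustrated with a hand-computed sketch. The counts below (TP = 90, FN = 10, FP = 5, TN = 95) are made up for illustration and are not output of getAccuracy:

```r
# Hypothetical 2x2 counts against a gold standard (illustrative only)
TP <- 90; FN <- 10; FP <- 5; TN <- 95

sens <- TP / (TP + FN)    # how often the test is positive when the condition is present
spec <- TN / (TN + FP)    # how often the test is negative when the condition is absent
ppv  <- TP / (TP + FP)    # share of positive results that are true positives
npv  <- TN / (TN + FN)    # share of negative results that are true negatives
plr  <- sens / (1 - spec) # true positive rate / false positive rate
nlr  <- (1 - sens) / spec # false negative rate / true negative rate
```

With these counts, sensitivity is 0.90, specificity 0.95, and the positive likelihood ratio 18, since a positive result is 18 times more likely in diseased than in non-diseased subjects.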
Arguments:

object (MCTab)
input from the diagTab function used to create the 2x2 contingency table.

... 
other arguments to be passed to DescTools::BinomCI.
ref (character)
reference condition; choose the one that matches your study design. "r"
indicates that the comparative test is a standard reference, "nr"
indicates the comparative test is not a standard reference, and "bnr"
indicates that neither the new test nor the comparative test is a reference.
alpha (numeric)
type-I risk, \(\alpha\).
r_ci (string)
string specifying the method used to compute the confidence interval
for a diagnostic test with a reference/gold standard. Default
is "wilson". Options are "wilson", "wald" and "clopper-pearson";
see DescTools::BinomCI.
nr_ci (string)
string specifying the method used to compute the confidence interval
for the comparative test with a non-reference standard. Default
is "wilson". Options are "wilson", "wald" and "clopper-pearson";
see DescTools::BinomCI.
bnr_ci (string)
string specifying the method used to compute the confidence interval
when both tests are non-reference, such as reader precision. Default
is "bootstrap". However, when the point estimate of APA or ANA equals 0 or 100%,
the method falls back to a transformed Wilson interval.
bootCI (string)
string specifying which bootstrap confidence
interval to use from the boot.ci() function in the boot package. Default is
"perc" (bootstrap percentile); options are "norm" (normal approximation),
"basic" (basic bootstrap), "stud" (studentized bootstrap) and "bca" (adjusted
bootstrap percentile).
nrep (integer)
number of replicates for bootstrapping; default is 1000.
rng.seed (integer)
seed for the random number generator used in bootstrap sampling. If NULL,
the RNG settings of the current R session are used.
digits (integer)
the desired number of digits; default is 4.
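As a rough sketch of how these arguments relate to their backing functions: the r_ci/nr_ci methods are passed through to DescTools::BinomCI, while bnr_ci = "bootstrap" resamples the data and summarizes with boot::boot.ci. The counts, paired data, and statistic below are invented for illustration; this is not the package's internal code:

```r
library(DescTools)
library(boot)

# r_ci / nr_ci: e.g. a Wilson interval for 90 positives out of 100 subjects
BinomCI(90, 100, conf.level = 0.95, method = "wilson")

# bnr_ci = "bootstrap" with bootCI = "perc": percentile CI for overall
# agreement between two non-reference tests (hypothetical paired data)
set.seed(12306)                          # plays the role of rng.seed
test1 <- rbinom(100, 1, 0.6)
test2 <- ifelse(runif(100) < 0.9, test1, 1 - test1)
dat   <- data.frame(test1, test2)

opa_stat <- function(d, idx) mean(d$test1[idx] == d$test2[idx])
bt <- boot(dat, opa_stat, R = 1000)      # R corresponds to nrep
boot.ci(bt, conf = 0.95, type = "perc")  # type corresponds to bootCI
```

Fixing the seed, as rng.seed does, makes the bootstrap interval reproducible across runs.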
Examples:

# For qualitative performance
data("qualData")
tb <- qualData %>%
diagTab(
formula = ~ CandidateN + ComparativeN,
levels = c(1, 0)
)
getAccuracy(tb, ref = "r")
getAccuracy(tb, ref = "nr", nr_ci = "wilson")
# For Between-Reader precision performance
data("PDL1RP")
reader <- PDL1RP$btw_reader
tb2 <- reader %>%
diagTab(
formula = Reader ~ Value,
bysort = "Sample",
levels = c("Positive", "Negative"),
rep = TRUE,
across = "Site"
)
getAccuracy(tb2, ref = "bnr")
getAccuracy(tb2, ref = "bnr", rng.seed = 12306)