
Calculate the concordance correlation coefficient.
ccc(data, ...)

# S3 method for data.frame
ccc(
  data,
  truth,
  estimate,
  bias = FALSE,
  na_rm = TRUE,
  case_weights = NULL,
  ...
)

ccc_vec(truth, estimate, bias = FALSE, na_rm = TRUE, case_weights = NULL, ...)
ccc() returns a tibble with columns .metric, .estimator, and .estimate, and 1 row of values. For grouped data frames, the number of rows returned will be the same as the number of groups. ccc_vec() returns a single numeric value (or NA).
data: A data.frame containing the columns specified by the truth and estimate arguments.

...: Not currently used.

truth: The column identifier for the true results (that is numeric). This should be an unquoted column name, although this argument is passed by expression and supports quasiquotation (you can unquote column names). For _vec() functions, a numeric vector.
estimate: The column identifier for the predicted results (that is also numeric). As with truth, this can be specified in different ways, but the primary method is to use an unquoted variable name. For _vec() functions, a numeric vector.
bias: A logical; should the biased estimate of variance be used (as in Lin (1989))? See the sketch after the argument descriptions for where this choice enters the computation.
na_rm: A logical value indicating whether NA values should be stripped before the computation proceeds.

case_weights: The optional column identifier for case weights. This should be an unquoted column name that evaluates to a numeric column in data. For _vec() functions, a numeric vector, hardhat::importance_weights(), or hardhat::frequency_weights().
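For orientation, here is a minimal sketch of the concordance correlation computation following Lin's (1989) formula. The helper ccc_manual() is hypothetical, written only for illustration (it is not part of yardstick), and is meant to show where the bias choice enters.

# Illustrative helper (not part of yardstick), assuming Lin's (1989) formula:
# ccc = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
ccc_manual <- function(truth, estimate, bias = FALSE) {
  n <- length(truth)
  # Unbiased (n - 1 denominator) moments by default
  v_t <- var(truth)
  v_e <- var(estimate)
  cv  <- cov(truth, estimate)
  if (bias) {
    # bias = TRUE rescales to the biased (n denominator) estimates,
    # as used by Lin (1989)
    adj <- (n - 1) / n
    v_t <- v_t * adj
    v_e <- v_e * adj
    cv  <- cv * adj
  }
  2 * cv / (v_t + v_e + (mean(truth) - mean(estimate))^2)
}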
Author: Max Kuhn
ccc() is a metric of both consistency/correlation and accuracy, while metrics such as rmse() are strictly for accuracy and metrics such as rsq() are strictly for consistency/correlation.
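To make the distinction concrete, a brief sketch (assuming yardstick is attached and the solubility_test data from the examples below is available) computes all three kinds of metric on the same predictions:

library(yardstick)

# ccc() blends accuracy and consistency; rmse() measures accuracy only,
# and rsq() measures consistency/correlation only
ccc(solubility_test, solubility, prediction)
rmse(solubility_test, solubility, prediction)
rsq(solubility_test, solubility, prediction)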
Lin, L. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 45(1), 255-268.
Nickerson, C. (1997). A note on "A concordance correlation coefficient to evaluate reproducibility". Biometrics, 53(4), 1503-1507.
Other numeric metrics: huber_loss(), huber_loss_pseudo(), iic(), mae(), mape(), mase(), mpe(), msd(), poisson_log_loss(), rmse(), rpd(), rpiq(), rsq(), rsq_trad(), smape()

Other consistency metrics: rpd(), rpiq(), rsq(), rsq_trad()

Other accuracy metrics: huber_loss(), huber_loss_pseudo(), iic(), mae(), mape(), mase(), mpe(), msd(), poisson_log_loss(), rmse(), smape()
# Supply truth and predictions as bare column names
ccc(solubility_test, solubility, prediction)

library(dplyr)

set.seed(1234)
size <- 100
times <- 10

# create 10 resamples
solubility_resampled <- bind_rows(
  replicate(
    n = times,
    expr = sample_n(solubility_test, size, replace = TRUE),
    simplify = FALSE
  ),
  .id = "resample"
)

# Compute the metric by group
metric_results <- solubility_resampled %>%
  group_by(resample) %>%
  ccc(solubility, prediction)

metric_results

# Resampled mean estimate
metric_results %>%
  summarise(avg_estimate = mean(.estimate))
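As an additional, illustrative example (not part of the package's own examples), the vector interface accepts bare numeric vectors, and case weights can be supplied as described in the argument descriptions; the frequency weights below are made up for demonstration:

# Vector interface on bare numeric vectors
ccc_vec(solubility_test$solubility, solubility_test$prediction)

# Hypothetical frequency weights (illustrative only)
wts <- hardhat::frequency_weights(
  rep(1:2, length.out = nrow(solubility_test))
)
ccc_vec(
  solubility_test$solubility,
  solubility_test$prediction,
  case_weights = wts
)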