The correlation is calculated using stats::cor.test.

Usage
effect_metrics_one_cor(
data,
col,
cross,
method = "pearson",
adjust = "fdr",
labels = TRUE,
clean = TRUE,
...
)
Value

A volker table containing the requested statistics.
If method = "pearson":
R-squared: Coefficient of determination.
n: Number of cases the calculation is based on.
Pearson's r: Correlation coefficient.
ci low / ci high: Lower and upper bounds of the 95% confidence interval.
df: Degrees of freedom.
t: t-statistic.
p: p-value for the statistical test, indicating whether the correlation differs from zero.
stars: Significance stars based on the p-value (*, **, ***).
If method = "spearman":
Spearman's rho is displayed instead of Pearson's r.
S-statistic is used instead of the t-statistic.
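To obtain the rank-based statistics described above, the same call can be made with method = "spearman". A minimal sketch, reusing the volker::chatgpt example data from the Examples section (the column names sd_age and use_private come from that dataset):

```r
library(volker)
data <- volker::chatgpt

# Spearman's rho and the S-statistic instead of Pearson's r and t
effect_metrics_one_cor(data, sd_age, use_private, method = "spearman")
```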
Arguments

data: A tibble.
col: The column holding metric values.
cross: The column holding metric values to correlate.
method: The output metrics: "pearson" = Pearson's r (the default) or "spearman" = Spearman's rho.
adjust: Performing multiple significance tests inflates the alpha error, so p-values need to be adjusted for the number of tests. Set a method supported by stats::p.adjust, e.g. "fdr" (the default) or "bonferroni". Disable adjustment with FALSE.
labels: If TRUE (the default), extracts labels from the attributes; see codebook.
clean: Prepare data by data_clean.
...: Placeholder to allow calling the method with unused parameters from effect_metrics.
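A sketch of the adjust argument in use, again on the volker::chatgpt example data. Per the description above, adjust accepts any method supported by stats::p.adjust, or FALSE to disable adjustment:

```r
library(volker)
data <- volker::chatgpt

# Bonferroni correction instead of the default "fdr"
effect_metrics_one_cor(data, sd_age, use_private, adjust = "bonferroni")

# Report unadjusted p-values
effect_metrics_one_cor(data, sd_age, use_private, adjust = FALSE)
```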
Examples

library(volker)
data <- volker::chatgpt
effect_metrics_one_cor(data, sd_age, use_private, metric = TRUE)