
Functions to compute effect size measures for ANOVAs, such as Eta-squared, Omega-squared and Epsilon-squared, and Cohen's f (or their partialled versions) for aov, aovlist and anova models. These indices represent an estimate of how much variance in the response variable is accounted for by the explanatory variable(s).
Effect sizes are computed using the sums of squares obtained from anova(model), which might not always be appropriate (Yeah... ANOVAs are hard...). See details.
eta_squared(model, partial = TRUE, ci = 0.9, ...)
omega_squared(model, partial = TRUE, ci = 0.9, ...)
epsilon_squared(model, partial = TRUE, ci = 0.9, ...)
cohens_f(model, partial = TRUE, ci = 0.9, ...)
model: A model, ANOVA object, or the result of parameters::model_parameters.
partial: If TRUE, return partial indices.
ci: Confidence Interval (CI) level.
...: Arguments passed to or from other methods (ignored).
A data frame with the effect size(s) and confidence interval(s).
For aov and aovlist models, the effect sizes are computed directly from the sums of squares. For all other models, the model is passed to anova(), and the effect sizes are approximated via test-statistic conversion (see F_to_eta2 for more details).
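As a rough illustration of this conversion route (a minimal sketch; the F_to_eta2() argument names f, df and df_error are assumed here - check its documentation):

library(effectsize)
fit <- lm(mpg ~ wt + factor(cyl), data = mtcars)
tab <- anova(fit)
# Convert the first term's F statistic to partial Eta-squared
F_to_eta2(tab[["F value"]][1], df = tab$Df[1], df_error = tab$Df[3])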
The sums of squares (or F statistics) used for the computation of the effect sizes are those returned by anova(model) (whatever those may be - for aov and aovlist these are type-1 sums of squares; for merMod these are type-3 sums of squares). Make sure these are the sums of squares you are interested in (you might want to pass the result of car::Anova(model, type = 3)).
It is generally recommended to fit models with contr.sum factor weights and centered covariates, for sensible results. See examples.
Confidence intervals are estimated using the noncentrality parameter method: this method searches for the ncp (non-centrality parameter) of the noncentral F distribution that yields the desired tail probabilities, and then converts these ncps to the corresponding effect sizes.
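A minimal sketch of that pivot idea, using only base R (the helper below is hypothetical and illustrative, not the package's internal code; the ncp-to-Eta-squared conversion ncp / (ncp + df1 + df2 + 1) is an assumption):

eta2_ci <- function(F_obs, df1, df2, ci = 0.9) {
  alpha <- 1 - ci
  # pf(F_obs, df1, df2, ncp) decreases as ncp grows, so each bound is a root search
  lo <- tryCatch(
    uniroot(function(ncp) pf(F_obs, df1, df2, ncp) - (1 - alpha / 2), c(0, 1e4))$root,
    error = function(e) 0  # no sign change: lower bound is 0
  )
  hi <- uniroot(function(ncp) pf(F_obs, df1, df2, ncp) - alpha / 2, c(0, 1e4))$root
  # Convert each ncp to a partial Eta-squared (assumed conversion)
  c(lower = lo / (lo + df1 + df2 + 1), upper = hi / (hi + df1 + df2 + 1))
}
eta2_ci(F_obs = 10, df1 = 2, df2 = 27)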
Special care should be taken when interpreting CIs with a lower bound equal to (or smaller than) 0, and even more care should be taken when the upper bound is equal to (or smaller than) 0 (Steiger, 2004; Morey et al., 2016).
Both Omega and Epsilon are unbiased estimators of the population's Eta, which is especially important in small samples. But which to choose?
Though Omega is the more popular choice (Albers & Lakens, 2018), Epsilon is analogous to adjusted R2 (Allen, 2017, p. 382), and has been found to be less biased (Carroll & Nordholm, 1975).
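For intuition, the one-way versions of these estimators can be sketched from an ANOVA table as follows (formulas as commonly given in the literature cited here; illustrative only, not the package's internal code):

aov_tab   <- anova(aov(mpg ~ factor(cyl), data = mtcars))
ss_effect <- aov_tab[["Sum Sq"]][1]
df_effect <- aov_tab$Df[1]
ms_error  <- aov_tab[["Mean Sq"]][2]
ss_total  <- sum(aov_tab[["Sum Sq"]])

ss_effect / ss_total                                        # Eta-squared
(ss_effect - df_effect * ms_error) / ss_total               # Epsilon-squared
(ss_effect - df_effect * ms_error) / (ss_total + ms_error)  # Omega-squared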
Cohen's f can take on values between zero, when the population means are all equal, and an indefinitely large number as the standard deviation of the means increases relative to the average standard deviation within each group. Cohen has suggested that values of 0.10, 0.25, and 0.40 represent small, medium, and large effect sizes, respectively.
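Cohen's f relates to (partial) Eta-squared as f = sqrt(eta2 / (1 - eta2)); for example:

eta2 <- 0.06
sqrt(eta2 / (1 - eta2))  # ~0.25, i.e. a "medium" effect on Cohen's scale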
Albers, C., & Lakens, D. (2018). When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias. Journal of Experimental Social Psychology, 74, 187-195.
Allen, R. (2017). Statistics and Experimental Design for Psychologists: A Model Comparison Approach. World Scientific Publishing Company.
Carroll, R. M., & Nordholm, L. A. (1975). Sampling Characteristics of Kelley's epsilon and Hays' omega. Educational and Psychological Measurement, 35(3), 541-554.
Kelley, T. (1935). An unbiased correlation ratio measure. Proceedings of the National Academy of Sciences, 21(9), 554-559.
Morey, R. D., Hoekstra, R., Rouder, J. N., Lee, M. D., & Wagenmakers, E. J. (2016). The fallacy of placing confidence in confidence intervals. Psychonomic Bulletin & Review, 23(1), 103-123.
Steiger, J. H. (2004). Beyond the F test: Effect size confidence intervals and tests of close fit in the analysis of variance and contrast analysis. Psychological Methods, 9, 164-182.
library(effectsize)
mtcars$am_f <- factor(mtcars$am)
mtcars$cyl_f <- factor(mtcars$cyl)
model <- aov(mpg ~ am_f * cyl_f, data = mtcars)
eta_squared(model)
omega_squared(model)
epsilon_squared(model)
cohens_f(model)
(etas <- eta_squared(model, partial = FALSE))
if(require(see)) plot(etas)
# Repeated-measures / multi-stratum design (aovlist):
model <- aov(mpg ~ cyl_f * am_f + Error(vs / am_f), data = mtcars)
epsilon_squared(model)
# Recommended:
# Type-3 effect sizes + effects coding
if (require(car, quietly = TRUE)) {
  contrasts(mtcars$am_f) <- contr.sum
  contrasts(mtcars$cyl_f) <- contr.sum
  model <- aov(mpg ~ am_f * cyl_f, data = mtcars)
  model_anova <- car::Anova(model, type = 3)
  eta_squared(model_anova)
}

# Effect sizes from a parameters::model_parameters() output:
if (require("parameters")) {
  model <- lm(mpg ~ wt + cyl, data = mtcars)
  mp <- model_parameters(model)
  eta_squared(mp)
}

# Mixed model: effect sizes are approximated from the anova() table
if (require(lmerTest, quietly = TRUE)) {
  model <- lmer(mpg ~ am_f * cyl_f + (1 | vs), data = mtcars)
  omega_squared(model)
}