Computes Cliff's Delta effect size for ordinal variables, with the related confidence interval, using efficient algorithms.
cliff.delta(d, ... )
# S3 method for formula cliff.delta(formula, data=list(), conf.level=.95, use.unbiased=TRUE, use.normal=FALSE, return.dm=FALSE, ...)
# S3 method for default cliff.delta(d, f, conf.level=.95, use.unbiased=TRUE, use.normal=FALSE, return.dm=FALSE, ...)
a numeric vector giving either the data values (if
f is a factor) or the treatment group values (if
f is a numeric vector)
either a factor with two levels or a numeric vector of values (see Details)
confidence level of the confidence interval
a logical indicating whether to compute the delta's variance using the "unbiased" estimate formula or the "consistent" estimate
a logical indicating whether to use the normal or the Student-t distribution for the confidence interval estimation
a logical indicating whether to return the dominance matrix. Warning: the explicit computation of the dominance matrix uses a sub-optimal algorithm, both in terms of memory and of time
a formula of the form
y ~ f, where
y is a numeric variable giving the data values and
f a factor with two levels giving the corresponding group
an optional matrix or data frame containing the variables in the model
formula. By default the variables are taken from environment(formula)
further arguments to be passed to or from methods.
A list of class
effsize containing the following components:
the Cliff's delta estimate
the confidence interval of the delta
the estimated variance of the delta
the confidence level used to compute the confidence interval
the dominance matrix used for computation, only if
return.dm is TRUE
a qualitative assessment of the magnitude of effect size
the method used for computing the effect size, always "Cliff's Delta"
the method used to compute the delta variance estimate, either the "unbiased" or the "consistent" formula
the distribution used to compute the confidence interval, either the normal or the Student-t distribution
The magnitude is assessed using the thresholds provided in (Romano 2006), i.e. |d|<0.147 "negligible", |d|<0.33 "small", |d|<0.474 "medium", otherwise "large".
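The threshold rule above can be sketched as a small base-R helper (the function name delta_magnitude is illustrative and not part of the package):

```r
## Illustrative helper mapping |d| to the Romano (2006) magnitude labels.
## Hypothetical name; not an exported function of the effsize package.
delta_magnitude <- function(d) {
  ad <- abs(d)
  if (ad < 0.147)      "negligible"
  else if (ad < 0.33)  "small"
  else if (ad < 0.474) "medium"
  else                 "large"
}

delta_magnitude(0.1)   # "negligible"
delta_magnitude(0.45)  # "medium"
```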
Uses the original formula reported in (Cliff 1996).
If the dominance matrix is required (i.e.
return.dm=TRUE), the full matrix is computed, thus the naive algorithm is used.
If both variables are factors, the optimized linear-complexity algorithm is used; otherwise the RLE algorithm (with complexity n log n) is used.
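For reference, the naive dominance-matrix computation mentioned above can be sketched in base R. This is an illustrative re-implementation of the delta estimate only (delta = (#(x_i > y_j) - #(x_i < y_j)) / (n1*n2)), not the package's optimized code, and naive_cliff_delta is a hypothetical name:

```r
## Naive O(n1*n2) Cliff's delta via the explicit dominance matrix.
## Illustrative sketch; not the effsize package implementation.
naive_cliff_delta <- function(x, y) {
  dm <- sign(outer(x, y, "-"))  # dominance matrix of +1 / 0 / -1
  mean(dm)                      # average dominance = Cliff's delta
}

naive_cliff_delta(c(4, 5, 6), c(1, 2, 3))  # 1: every x exceeds every y
```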
Norman Cliff (1996). Ordinal methods for behavioral data analysis. Routledge.
J. Romano, J. D. Kromrey, J. Coraggio, J. Skowronek, Appropriate statistics for ordinal level data: Should we really be using t-test and Cohen's d for evaluating group differences on the NSSE and other surveys?, in: Annual meeting of the Florida Association of Institutional Research, 2006.
K.Y. Hogarty and J.D. Kromrey (1999). Using SAS to Calculate Tests of Cliff's Delta. Proceedings of the Twenty-Fourth Annual SAS Users Group International Conference, Miami Beach, Florida, p. 238. Available at: http://www2.sas.com/proceedings/sugi24/Posters/p238-24.pdf