lillie.test

Lilliefors (Kolmogorov-Smirnov) test for normality

Performs the Lilliefors (Kolmogorov-Smirnov) test for the composite hypothesis of normality, see e.g. Thode (2002, Sec. 5.1.1).

Keywords
htest
Usage
lillie.test(x)
Arguments
x
a numeric vector of data values, the number of which must be greater than 4. Missing values are allowed.
Details

The Lilliefors (Kolmogorov-Smirnov) test is an EDF omnibus test for the composite hypothesis of normality. The test statistic is the maximal absolute difference between the empirical and the hypothesized cumulative distribution function. It may be computed as $D = \max\{D^{+}, D^{-}\}$ with $$D^{+} = \max_{i=1,\ldots,n}\{i/n - p_{(i)}\}, \qquad D^{-} = \max_{i=1,\ldots,n}\{p_{(i)} - (i-1)/n\},$$ where $p_{(i)} = \Phi([x_{(i)} - \overline{x}]/s)$. Here, $\Phi$ is the cumulative distribution function of the standard normal distribution, and $\overline{x}$ and $s$ are the mean and standard deviation of the data values.

The p-value is computed from the Dallal-Wilkinson (1986) formula, which is claimed to be reliable only when the p-value is smaller than 0.1. If the Dallal-Wilkinson p-value turns out to be greater than 0.1, the p-value is instead computed from the distribution of the modified statistic $Z = D(\sqrt{n} - 0.01 + 0.85/\sqrt{n})$, see Stephens (1974); the actual p-value formula was obtained by a simulation and approximation process.
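The statistic described above can be computed by hand in a few lines of base R. Since (as the Note observes) ks.test() with estimated mean and standard deviation yields the same statistic, it can serve as a check; the variable names below are our own, not part of the package:

```r
set.seed(1)
x <- rnorm(50)                               # sample data
n <- length(x)

## p_(i) = Phi([x_(i) - xbar]/s), evaluated at the order statistics
p <- pnorm((sort(x) - mean(x)) / sd(x))

Dplus  <- max((1:n) / n - p)                 # D^+
Dminus <- max(p - (0:(n - 1)) / n)           # D^-
D <- max(Dplus, Dminus)                      # D = max{D^+, D^-}

## ks.test() with estimated parameters computes the same statistic
## (its p-value, however, is not valid here; see the Note)
all.equal(D, unname(ks.test(x, "pnorm", mean(x), sd(x))$statistic))
```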

Value

A list with class htest containing the following components:

• statistic: the value of the Lilliefors (Kolmogorov-Smirnov) statistic.
• p.value: the p-value for the test.
• method: the character string "Lilliefors (Kolmogorov-Smirnov) normality test".
• data.name: a character string giving the name(s) of the data.
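A short sketch of working with the returned object; it assumes the nortest package is installed, and the guard skips the example otherwise:

```r
## Assumes the nortest package is available (install.packages("nortest"))
if (requireNamespace("nortest", quietly = TRUE)) {
  set.seed(1)
  res <- nortest::lillie.test(rnorm(100))

  class(res)      # "htest", so print(res) gives the usual test summary
  res$statistic   # named value of the statistic D
  res$p.value     # p-value for the test
  res$method      # "Lilliefors (Kolmogorov-Smirnov) normality test"
  res$data.name   # "rnorm(100)"
}
```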

Note

The Lilliefors (Kolmogorov-Smirnov) test is the most famous EDF omnibus test for normality, although it is known to perform worse than the Anderson-Darling test and the Cramer-von Mises test.

Although the test statistic obtained from lillie.test(x) is the same as that obtained from ks.test(x, "pnorm", mean(x), sd(x)), it is not correct to use the p-value from the latter for the composite hypothesis of normality (mean and variance unknown), since the distribution of the test statistic differs when the parameters are estimated.

The function call lillie.test(x) essentially produces the same result as the S-PLUS function call ks.gof(x), with the distinction that the p-value is not set to 0.5 when the Dallal-Wilkinson approximation yields a p-value greater than 0.1. (Actually, the alternative p-value approximation is provided for the complete range of test statistic values, but is used only when the Dallal-Wilkinson approximation fails.)
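The warning about ks.test() p-values can be seen in a small simulation: under the null hypothesis a valid p-value is Uniform(0,1), but ks.test() with parameters estimated from the data is far too conservative. A sketch (the sample size and replication count are arbitrary choices of ours):

```r
set.seed(42)
## p-values from ks.test() with mean/sd estimated from each sample
pvals <- replicate(500, {
  x <- rnorm(30)                  # data truly normal
  ks.test(x, "pnorm", mean(x), sd(x))$p.value
})

## A valid level-0.05 test would reject about 5% of the time;
## the naive ks.test() p-values reject almost never
mean(pvals < 0.05)
```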

References

Dallal, G.E. and Wilkinson, L. (1986): An analytic approximation to the distribution of Lilliefors' test for normality. The American Statistician, 40, 294--296.

Stephens, M.A. (1974): EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association, 69, 730--737.

Thode Jr., H.C. (2002): Testing for Normality. Marcel Dekker, New York.

See Also

shapiro.test for performing the Shapiro-Wilk test for normality. ad.test, cvm.test, pearson.test, sf.test for performing further tests for normality. qqnorm for producing a normal quantile-quantile plot.

Examples
lillie.test(rnorm(100, mean = 5, sd = 3))
lillie.test(runif(100, min = 2, max = 4))
Documentation reproduced from package nortest, version 1.0-4, License: GPL (>= 2)
