An introduction analogous to the one that follows is available at http://en.wikipedia.org/wiki/Anderson-Darling_test. Given a sample $x_i \ (i=1,\ldots,m)$ of data drawn from a distribution $F_R(x)$, the test is used to check the null hypothesis $H_0 : F_R(x) = F(x,\theta)$, where $F(x,\theta)$ is the hypothetical distribution and $\theta$ is an array of parameters estimated from the sample $x_i$.
The Anderson-Darling goodness-of-fit test measures the discrepancy between the hypothetical distribution $F(x,\theta)$ and the empirical cumulative frequency function $F_m(x)$, defined as:
$$F_m(x) = \begin{cases} 0 & x < x_{(1)} \\ i/m & x_{(i)} \leq x < x_{(i+1)} \\ 1 & x_{(m)} \leq x \end{cases}$$
where $x_{(i)}$ is the $i$-th element of the ordered sample (in increasing order).
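For illustration, $F_m(x)$ can be evaluated in a few lines of code; the following Python sketch (the helper name `cumulative_frequency` is ours, not from any particular library) counts the ordered sample values not exceeding $x$:

```python
import numpy as np

def cumulative_frequency(sample, x):
    """Empirical cumulative frequency F_m evaluated at the points x:
    0 below x_(1), i/m on [x_(i), x_(i+1)), and 1 from x_(m) on."""
    ordered = np.sort(sample)  # x_(1) <= x_(2) <= ... <= x_(m)
    m = len(ordered)
    # searchsorted(..., side="right") counts ordered values <= x
    return np.searchsorted(ordered, x, side="right") / m
```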
The test statistic is:
$$Q^2 = m \! \int_{-\infty}^{+\infty} \left[ F_m(x) - F(x,\theta) \right]^2 \Psi(x) \,dF(x,\theta)$$
where $\Psi(x)$, in the case of the Anderson-Darling test (Laio, 2004), is $\Psi(x) = [F(x,\theta) (1 - F(x,\theta))]^{-1}$.
In practice, the statistic is calculated as:
$$A^2 = -m -\frac{1}{m} \sum_{i=1}^m \left\{ (2i-1)\ln[F(x_{(i)},\theta)] + (2m+1-2i)\ln[1 - F(x_{(i)},\theta)] \right\}$$
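As a minimal sketch of this computation (assuming SciPy only for the example CDF; the helper name `anderson_darling_A2` is ours):

```python
import numpy as np
from scipy import stats

def anderson_darling_A2(sample, cdf):
    """A^2 for a sample and a hypothetical CDF F(x, theta),
    passed in as a callable mapping x to F(x, theta)."""
    x = np.sort(sample)  # ordered sample x_(1), ..., x_(m)
    m = len(x)
    F = cdf(x)           # F(x_(i), theta)
    i = np.arange(1, m + 1)
    return -m - np.sum((2 * i - 1) * np.log(F)
                       + (2 * m + 1 - 2 * i) * np.log(1 - F)) / m

# Example: normal hypothesis with parameters estimated from the sample
sample = np.random.default_rng(1).normal(10.0, 2.0, size=50)
mu, sigma = sample.mean(), sample.std(ddof=1)
A2 = anderson_darling_A2(sample, lambda x: stats.norm.cdf(x, mu, sigma))
```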
The statistic $A^2$ obtained in this way can then be compared with the population of $A^2$ values that one obtains when the samples actually belong to the hypothetical distribution $F(x,\theta)$.
In the case of the test of normality, this distribution is known (see Laio, 2004).
In other cases, e.g. for the Pearson Type III distribution considered here, it can be derived with a Monte Carlo procedure.
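A possible Monte Carlo sketch (assuming SciPy's `pearson3` and reusing the `anderson_darling_A2` helper above; the function name and simulation size are illustrative): draw many samples of size $m$ from the fitted distribution, re-estimate $\theta$ on each, recompute $A^2$, and read the significance level off the resulting empirical distribution.

```python
import numpy as np
from scipy import stats

def a2_null_distribution(m, fitted, n_sim=10_000, seed=None):
    """Monte Carlo approximation of the distribution of A^2 under H_0
    for samples of size m drawn from the fitted Pearson III distribution.
    Relies on anderson_darling_A2 from the previous sketch."""
    rng = np.random.default_rng(seed)
    a2 = np.empty(n_sim)
    for k in range(n_sim):
        sim = fitted.rvs(size=m, random_state=rng)
        # re-estimate the parameters on each synthetic sample,
        # exactly as was done on the observed data
        params = stats.pearson3.fit(sim)
        a2[k] = anderson_darling_A2(sim, lambda x: stats.pearson3.cdf(x, *params))
    return a2

# Usage: build fitted = stats.pearson3(skew, loc, scale) from the observed
# sample; the p-value is the fraction of simulated A^2 values that exceed
# the observed A^2.
```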