coll.test(rand, lenSample = 2^14, nbCell = 2^20, nbSample = 1000, echo = TRUE, ...)
rand
a function generating random numbers, such as runif.
lenSample
numeric, the length of each generated sample.
nbCell
numeric, the number of cells.
nbSample
numeric, the number of generated samples.
echo
logical to print detailed results; default value is TRUE.
...
further arguments to pass to the rand function.
coll.test returns a list with the following components:
statistic
the value of the chi-squared statistic.
p.value
the p-value of the test.
observed
the observed counts.
expected
the expected counts under the null hypothesis.
residuals
the Pearson residuals, (observed - expected) / sqrt(expected).
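As an illustration (assuming coll.test comes from the randtoolbox package; this snippet is not part of the documented examples below), the returned object can be handled as a plain list and its components extracted by name:

# illustrative only: extract individual components of the result
library(randtoolbox)                     # package assumed to provide coll.test
res <- coll.test(runif, echo = FALSE)    # echo = FALSE to skip the detailed printout
res$statistic                            # value of the chi-squared statistic
res$p.value                              # p-value of the test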
Let us denote by $n$ the length of samples (i.e. the lenSample argument), by $k$ the number of cells (i.e. the nbCell argument) and by $m$ the number of samples (i.e. the nbSample argument).
A collision occurs when a random number falls into a cell that already contains at least one random number. Let $C$ denote the number of collisions.
The distribution of the number of collisions $C$ is given by $$P(C = c) = \prod_{i=0}^{n-c-1}\frac{k-i}{k}\, \frac{1}{k^c}\, {}_2S_n^{n-c},$$ where ${}_2S_n^{n-c}$ denotes the Stirling number of the second kind and $c = 0,\dots,n-1$.
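For small $n$ and $k$, this exact distribution can be evaluated directly. The sketch below is illustrative R code only, not the implementation used inside coll.test, and the helper names are made up: it computes the Stirling numbers of the second kind by the usual recurrence and checks that the probabilities sum to one.

# Stirling numbers of the second kind via S(i, j) = j*S(i-1, j) + S(i-1, j-1)
stirling2 <- function(n, j) {
  S <- matrix(0, nrow = n + 1, ncol = n + 1)
  S[1, 1] <- 1                            # S(0, 0) = 1
  for (i in 1:n)
    for (m in 1:i)
      S[i + 1, m + 1] <- m * S[i, m + 1] + S[i, m]
  S[n + 1, j + 1]
}
# exact P(C = c) when n numbers are thrown into k cells
collProb <- function(c, n, k)
  prod((k - seq(0, n - c - 1)) / k) / k^c * stirling2(n, n - c)
n <- 8; k <- 32
probs <- sapply(0:(n - 1), collProb, n = n, k = k)
sum(probs)                                # should be numerically equal to 1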
However, the exact formula cannot be used for large $n$, since the Stirling numbers need $O(n\log(n))$ time to be computed. We therefore use a Gaussian approximation if $\frac{n}{k} > \frac{1}{32}$ and $n \geq 2^8$, a Poisson approximation if $\frac{n}{k} < \frac{1}{32}$, and the exact formula otherwise. Finally, we generate $m$ samples of random numbers, count the number of collisions in each sample, and compute a chi-squared statistic from these collision counts.
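The effect of these regimes can be checked by simulation. The following sketch is again illustrative only: the collision counting follows the definition above, the expectation $n - k + k(1 - 1/k)^n$ is the exact mean of $C$, and $n^2/(2k)$ is the usual rough mean of the Poisson approximation in the sparse case (stated here as a standard result, not taken from the package code).

# illustrative only: empirical collision counts in the sparse regime n/k < 1/32
n <- 2^8; k <- 2^14                       # n/k = 1/64, so the Poisson approximation applies
countColl <- function(u, k) length(u) - length(unique(floor(u * k)))
sim <- replicate(5000, countColl(runif(n), k))
mean(sim)                                 # empirical mean number of collisions
n - k + k * (1 - 1/k)^n                   # exact expectation of C
n^2 / (2 * k)                             # rough Poisson mean for small n/k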
L'Ecuyer P. (2001), Software for uniform random number generation: distinguishing the good and the bad. Proceedings of the 2001 Winter Simulation Conference. (available online)
L'Ecuyer P. and Simard R. (2007), TestU01: A C library for empirical testing of random number generators. ACM Transactions on Mathematical Software 33(4), Article 22.
freq.test, serial.test, poker.test, order.test and gap.test; ks.test for the Kolmogorov-Smirnov test and acf for the autocorrelation function.
# (1) Poisson approximation
#
coll.test(runif)
# (2) exact distribution
#
coll.test(SFMT, 2^7, 2^10, 10000)