The null hypothesis is that the data come from a population with
independent and identically distributed realizations. The one-sided
alternative hypothesis is that the (weighted) number of records is
greater (or less) than under the null hypothesis. The
(weighted)-number-of-records statistic is calculated according to:
$$N_{..}^\omega = \sum_{m=1}^M \sum_{t=1}^T \omega_t I_{tm},$$
where \(\omega_t\) are weights given to the different records
according to their position in the series and \(I_{tm}\) are the record
indicators (see I.record).
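As a rough illustration of how this statistic can be computed, the following R sketch takes a \(T \times M\) matrix X whose columns are the series; the helper names record_ind and N_stat are hypothetical and are not part of the package interface.

    # Record indicators of one series: I[1] = 1, I[t] = 1 if x[t] exceeds all earlier values
    record_ind <- function(x) {
      I <- numeric(length(x))
      I[1] <- 1
      for (t in 2:length(x)) I[t] <- as.numeric(x[t] > max(x[1:(t - 1)]))
      I
    }

    # Weighted number of records pooled over the M series
    N_stat <- function(X, weights = rep(1, nrow(X))) {
      I <- apply(X, 2, record_ind)   # T x M matrix of record indicators
      sum(weights * rowSums(I))      # sum_m sum_t omega_t I_tm
    }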
The statistic \(N_{..}^\omega\) has an exact Poisson binomial distribution
when the \(\omega_t\)'s take values only in \(\{0,1\}\). In any case,
it is also approximately normally distributed, with standardized statistic
$$Z = \frac{N_{..}^\omega - \mu}{\sigma},$$
where the mean and variance of \(N_{..}^\omega\) under the null hypothesis are
$$\mu = M \sum_{t=1}^T \omega_t \frac{1}{t},$$
$$\sigma^2 = M \sum_{t=2}^T \omega_t^2 \frac{1}{t} \left(1-\frac{1}{t}\right).$$
If correct = TRUE, then a continuity correction is applied:
$$Z = \frac{N_{..}^\omega \pm 0.5 - \mu}{\sigma},$$
with \(-\) if the alternative is greater and \(+\) if the
alternative is less.
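A minimal sketch of this normal approximation, building on the hypothetical N_stat helper above; the argument names weights, alternative and correct are illustrative and need not match the package function.

    N_normal_test <- function(X, weights = rep(1, nrow(X)),
                              alternative = c("greater", "less"),
                              correct = FALSE) {
      alternative <- match.arg(alternative)
      T <- nrow(X); M <- ncol(X); t <- seq_len(T)
      mu     <- M * sum(weights / t)                               # null mean
      sigma2 <- M * sum(weights[-1]^2 / t[-1] * (1 - 1 / t[-1]))   # null variance (t >= 2)
      cc <- if (correct) { if (alternative == "greater") -0.5 else 0.5 } else 0
      Z  <- (N_stat(X, weights) + cc - mu) / sqrt(sigma2)
      p  <- if (alternative == "greater") pnorm(Z, lower.tail = FALSE) else pnorm(Z)
      c(statistic = Z, p.value = p)
    }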
When \(M>1\), the variance under the null hypothesis can be replaced
by the sample variance across the \(M\) series,
\(\hat{\sigma}^2\). In this case, the statistic \(N_{S,..}^\omega\)
is asymptotically \(t\) distributed, which yields a test that is more
robust against serial correlation.
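A sketch of this variant, under the assumption that the pooled statistic is standardized with the sample variance of the per-series weighted record counts and compared with a \(t\) distribution on \(M-1\) degrees of freedom; the helper name N_t_test and this exact form are assumptions, not taken from the package.

    N_t_test <- function(X, weights = rep(1, nrow(X)),
                         alternative = c("greater", "less")) {
      alternative <- match.arg(alternative)
      T <- nrow(X); M <- ncol(X); t <- seq_len(T)
      I  <- apply(X, 2, record_ind)        # T x M record indicators
      Nm <- colSums(weights * I)           # weighted record count of each series
      mu <- M * sum(weights / t)           # null mean of the pooled statistic
      Tstat <- (sum(Nm) - mu) / sqrt(M * var(Nm))   # sample variance replaces sigma^2
      p <- if (alternative == "greater") {
        pt(Tstat, df = M - 1, lower.tail = FALSE)
      } else {
        pt(Tstat, df = M - 1)
      }
      c(statistic = Tstat, p.value = p)
    }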
If simulate.p.value = TRUE, the p-value is estimated by Monte Carlo
simulation.
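Conceptually, the Monte Carlo p-value compares the observed statistic with its distribution under iid continuous series, for which only the ranks matter; the following sketch uses the hypothetical N_stat helper, and B and the name N_mc_pvalue are illustrative.

    N_mc_pvalue <- function(X, weights = rep(1, nrow(X)),
                            alternative = "greater", B = 1000) {
      N_obs <- N_stat(X, weights)
      # any continuous iid distribution gives the null distribution of the statistic
      N_sim <- replicate(B, N_stat(matrix(rnorm(nrow(X) * ncol(X)), nrow = nrow(X)), weights))
      if (alternative == "greater") mean(N_sim >= N_obs) else mean(N_sim <= N_obs)
    }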
The size of the tests is adequate for any values of \(T\) and \(M\).
Some comments and a power study are given by Cebrián, Castillo-Mateo and
Asín (2021).