In general, the power of the tests is determined under the assumption that the
approximate distributions of the four test statistics belong to the family of
noncentral \(\chi^2\) distributions with \(df\) equal to the number of
free item-category parameters and noncentrality parameter \(\lambda\).
The latter depends on the scenario of deviation from the hypothesis to be tested
and on a specified sample size. Given the probability of the error of the first
kind \(\alpha\), the power of the tests can be determined from \(\lambda\).
More details about the distributions of the test statistics and the relationship
between \(\lambda\), power, and sample size can be found in Draxler and
Alexandrowicz (2015).
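For illustration, this last step can be carried out directly in R; a minimal sketch, assuming hypothetical values for \(df\), \(\lambda\), and \(\alpha\):

```r
# Hypothetical values: 9 free item-category parameters, noncentrality
# parameter 15, and a type I error probability of 0.05.
df     <- 9
lambda <- 15
alpha  <- 0.05

# 1 - alpha quantile of the central chi-square distribution.
q <- qchisq(1 - alpha, df = df)

# Power = 1 - F_{df, lambda}(q), with F the noncentral chi-square cdf.
power <- 1 - pchisq(q, df = df, ncp = lambda)
power
```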
As regards the concept of sample size, a distinction between the informative and the total
sample size has to be made, since the power of the tests depends only on the informative
sample size. In the conditional maximum likelihood (CML) context, the responses of persons
with a minimum or maximum person score are completely uninformative; they do not contribute
to the value of the test statistic. Thus, the informative sample size does not include
these persons, whereas the total sample size comprises all persons.
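For binary items, for instance, the informative sample size can be obtained directly from the person scores. A minimal sketch, assuming a hypothetical 0/1 response matrix X with persons in rows:

```r
# Hypothetical 0/1 response matrix: 200 persons, 5 items.
set.seed(1)
X <- matrix(rbinom(200 * 5, size = 1, prob = 0.6), nrow = 200)

k      <- ncol(X)     # number of items (maximum person score)
scores <- rowSums(X)  # person scores

# Persons with minimum (0) or maximum (k) score are uninformative in the
# CML context; the informative sample size excludes them.
n_total <- nrow(X)
n_inf   <- sum(scores > 0 & scores < k)
c(total = n_total, informative = n_inf)
```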
In particular, the determination of \(\lambda\), and thus of the power of the tests, is
based on a simple Monte Carlo approach. Data (the responses of a large number of persons
to a number of items) are generated given a user-specified scenario of a deviation from
the hypothesis to be tested. A scenario of a deviation is given by a choice of the
item-category parameters and of the person parameters (to be drawn randomly from a
specified distribution) for each of the two groups. Such a scenario may be called a local
deviation, since deviations can be specified locally for each item-category. The relative
group sizes are determined by the choice of the number of person parameters for each of the
two groups. For instance, by default \(10^6\) person parameters are drawn randomly for
each group; in this case, it is implicitly assumed that the two groups of persons are
of equal size. The user can specify the relative group sizes by choosing the lengths of
the arguments persons1 and persons2 appropriately. Note that the relative group sizes
do have an impact on the power and sample size of the tests, as the sketch below illustrates.
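A minimal sketch of such a scenario for binary (Rasch) items follows; the item parameters, the person parameter distributions, and the local deviation on item 3 are hypothetical, and the vectors persons1 and persons2 merely illustrate the roles of the corresponding arguments:

```r
# Hypothetical local deviation scenario: item 3 differs between the groups.
set.seed(123)
beta1 <- c(-1, -0.5, 0,   0.5, 1)   # item parameters, group 1
beta2 <- c(-1, -0.5, 0.3, 0.5, 1)   # item parameters, group 2

# Person parameters drawn randomly from a specified distribution; equal
# lengths imply equal relative group sizes.
persons1 <- rnorm(10^5, mean = 0, sd = 1.5)
persons2 <- rnorm(10^5, mean = 0, sd = 1.5)

# Bernoulli draws with P(X = 1) = plogis(theta - beta) per person-item pair.
sim_group <- function(theta, beta) {
  p <- plogis(outer(theta, beta, "-"))
  matrix(rbinom(length(p), size = 1, prob = p), nrow = length(theta))
}
X1 <- sim_group(persons1, beta1)
X2 <- sim_group(persons2, beta2)
```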
The next step is to compute a test statistic \(T\) (Wald, LR, score, or gradient) from the simulated data. The observed
value \(t\) of the test statistic is then divided by the informative sample size
\(n_{infsim}\) observed in the simulated data. This yields the so-called global deviation
\(e = t / n_{infsim}\), i.e., the chosen scenario of a deviation from the hypothesis to
be tested is thus represented by a single number. The power of the tests can be determined
given a user-specified total sample size denoted by \(n_{total}\). The noncentrality
parameter \(\lambda\) can then be expressed by
\(\lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e\), where \(n_{totalsim}\) denotes
the total number of persons in the simulated data and \(n_{infsim} / n_{totalsim}\) is
the proportion of informative persons in the simulated data. Let \(q_{1- \alpha}\) be the
\(1 - \alpha\) quantile of the central \(\chi^2\) distribution with \(df\) equal to the
number of free item-category parameters. Then,
$$power = 1 - F_{df, \lambda} (q_{1- \alpha}),$$
where \(F_{df, \lambda}\) is the cumulative distribution function of the noncentral
\(\chi^2\) distribution with \(df\) equal to the number of free item-category parameters
and \(\lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e\). It is thereby assumed that
the frequency distribution of the person scores in a sample of size \(n_{total}\) is
proportional to the distribution of person scores observed in the simulated data. The same
holds true with respect to the relative group sizes, i.e., the relative frequencies of the
two person groups in a sample of size \(n_{total}\) are assumed to equal the relative
frequencies of the two groups in the simulated data.
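Putting these quantities together, a minimal sketch of how \(e\), \(\lambda\), and the power are obtained; the value of the test statistic and all sample sizes are hypothetical:

```r
# Hypothetical inputs.
t_obs      <- 30      # observed value t of the test statistic
df         <- 9       # number of free item-category parameters
n_infsim   <- 190000  # informative persons in the simulated data
n_totalsim <- 200000  # total persons in the simulated data
n_total    <- 500     # user-specified total sample size
alpha      <- 0.05

e      <- t_obs / n_infsim                       # global deviation
lambda <- n_total * (n_infsim / n_totalsim) * e  # noncentrality parameter
power  <- 1 - pchisq(qchisq(1 - alpha, df), df = df, ncp = lambda)
power
```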
Note that in this approach the data have to be generated only once; no replications are
needed. Thus, the procedure is computationally inexpensive.
Since \(e\) is determined from the value of the test statistic observed in the simulated
data, it has to be treated as a realized value of a random variable \(E\). The same holds
true for \(\lambda\) as well as for the power of the tests. Thus, the power is a realized
value of a random variable that shall be denoted by \(P\). Consequently, the (realized)
value of the power of the tests need not be equal to the exact power that follows from the
user-specified \(n_{total}\), \(\alpha\), and the chosen item-category parameters used
for the simulation of the data. If the CML estimates of these parameters computed from the
simulated data are close to the predetermined parameters, the power of the tests will be
close to the exact value. This will generally be the case if the number of person parameters
used for simulating the data is large, e.g., \(10^5\) or even \(10^6\) persons. In such
cases, the possible random error of the computation procedure based on the simulated data
may no longer be of practical relevance. For this reason, a large number of persons for
the simulation process is generally recommended.
For theoretical reasons, the random error involved in computing the power of the tests can
be approximated quite well. A suitable approach is the well-known delta method. Basically,
it is a Taylor polynomial of first order, i.e., a linear approximation of a function.
According to it, the variance of a function of a random variable can be approximated by
the variance of this random variable multiplied by the square of the first derivative of
the respective function. In the present problem, the variance of the test statistic \(T\)
is (approximately) given by the variance of a noncentral \(\chi^2\) distribution with \(df\)
equal to the number of free item-category parameters and noncentrality parameter \(\lambda\).
Thus, \(Var(T) = 2 (df + 2 \lambda)\), with \(\lambda = t\). Since the global deviation
\(e = (1 / n_{infsim}) * t\), it follows for the variance of the corresponding random
variable \(E\) that \(Var(E) = (1 / n_{infsim})^2 * Var(T)\).
The power of the tests is a function of \(e\), given by \(1 - F_{df, \lambda} (q_{1- \alpha})\),
where \(\lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e\) and \(df\) is equal to the
number of free item-category parameters. By the delta method, one obtains for the variance of \(P\)
$$Var(P) = Var(E) * (F'_{df, \lambda} (q_{1- \alpha}))^2,$$
where \(F'_{df, \lambda}\) is the derivative of \(F_{df, \lambda}\) with respect to \(e\).
This derivative is determined numerically and evaluated at \(e\) using the package numDeriv. The square root of
\(Var(P)\) is then used to quantify the random error of the suggested Monte Carlo computation
procedure. It is called the Monte Carlo error of power.
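To make this concrete, a minimal sketch of the delta-method computation; all numerical inputs are hypothetical, and only the function grad from the package numDeriv is used:

```r
library(numDeriv)

# Hypothetical quantities, as in the sketch above.
t_obs      <- 30      # observed value t of the test statistic
df         <- 9       # number of free item-category parameters
n_infsim   <- 190000  # informative persons in the simulated data
n_totalsim <- 200000  # total persons in the simulated data
n_total    <- 500     # user-specified total sample size
alpha      <- 0.05

e     <- t_obs / n_infsim    # global deviation
var_T <- 2 * (df + 2 * t_obs)  # Var(T) of the noncentral chi^2, lambda = t
var_E <- var_T / n_infsim^2    # Var(E) = (1 / n_infsim)^2 * Var(T)

# Power as a function of the global deviation e.
power_fun <- function(e) {
  lambda <- n_total * (n_infsim / n_totalsim) * e
  1 - pchisq(qchisq(1 - alpha, df), df = df, ncp = lambda)
}

# Numerical derivative of the power with respect to e, evaluated at e.
d_power <- grad(power_fun, x = e)

var_P    <- var_E * d_power^2  # delta method
mc_error <- sqrt(var_P)        # Monte Carlo error of power
mc_error
```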