This is an algorithm proposed in Fay and Brittain (2022, Chapter 20). Here are the details of the algorithm. For step 1, we pick a starting sample size, say $N_1$, the number of replications within a batch, $m$,
and the total number of batches, $b_{tot}$.
We simulate $m$ data sets with sample size $N_1$, and get the proportion of rejections, say $P_1$.
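To make this batch simulation, and the normal-approximation step described next, concrete, here is a minimal sketch in Python. The data-generating model, the one-sided test, the helper names (`simulate_one_rejection`, `estimate_power`, `normal_approx_n`), and the exact form of the normal-approximation rescaling are all illustrative assumptions, not Fay and Brittain's code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_one_rejection(n, effect=0.4, alpha=0.025):
    """Stand-in for the user's simulation: one data set of size n, one test,
    return True if the null hypothesis is rejected."""
    x = rng.normal(loc=effect, scale=1.0, size=n)
    res = stats.ttest_1samp(x, popmean=0.0, alternative="greater")
    return res.pvalue < alpha

def estimate_power(n, m):
    """Step 1 batch: simulate m data sets of size n, return the proportion rejected."""
    return sum(simulate_one_rejection(n) for _ in range(m)) / m

def normal_approx_n(n1, p1, target_power, alpha=0.025):
    """One common normal-approximation rescaling (an assumption; the book's
    version may differ): treat the standardized effect as proportional to
    sqrt(n), back it out from (n1, p1), and solve for the target power."""
    z_alpha = stats.norm.ppf(1 - alpha)
    z_implied = stats.norm.ppf(p1) + z_alpha   # needs p1 comfortably above alpha
    z_needed = stats.norm.ppf(target_power) + z_alpha
    return int(np.ceil(n1 * (z_needed / z_implied) ** 2))

N1, m = 50, 200                       # illustrative choices
P1 = estimate_power(N1, m)
N_norm = normal_approx_n(N1, P1, target_power=0.90)
```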
Then we use a normal approximation to estimate the target sample size, say $N_{norm}$. In step 2, we simulate $m$ data sets with sample size $N_2 = N_{norm}$
to get the associated proportion of rejections, say $P_2$. We repeat 2 more batches with $N_3 = N_{norm}/2$ and $N_4 = 2 N_{norm}$,
to get proportions $P_3$ and $P_4$. Then in step 3, we use isotonic regression (which forces the estimated power to be non-decreasing with sample size) on the 4 observed pairs $(N_1,P_1),\ldots,(N_4,P_4)$, together with linear interpolation, to get our best estimate of
the sample size at the target power, $N_{target}$. We use that estimate of $N_{target}$ as the sample size for the next
batch of simulations. This idea of using the best estimate of the target for the next iteration is studied in
Wu (1985, see Section 3). Step 4 is iterative: for the $i$th batch we repeat the isotonic regression, except now $N_i$ is estimated from the first $(i-1)$ observation pairs. We repeat step 4 until either the number of batches reaches $b_{tot}$,
or the current sample size estimate is the same as the last nrepeatSwitch-1 estimates, in which case we switch to an up-and-down-like method.
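Before turning to the details of that up-and-down-like method, here is a minimal sketch of the isotonic-regression-plus-interpolation step used in steps 3 and 4, assuming the observed pairs are held in arrays. The helper name `isotonic_target_n`, the use of scikit-learn's `IsotonicRegression`, the handling of ties, and the rounding to a whole number are illustrative choices, not the source's implementation.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_target_n(ns, ps, target_power):
    """Fit a non-decreasing power curve to the observed (sample size,
    proportion rejected) pairs and linearly interpolate the sample size
    that attains the target power."""
    ns = np.asarray(ns, dtype=float)
    ps = np.asarray(ps, dtype=float)
    order = np.argsort(ns)
    ns, ps = ns[order], ps[order]
    # Pool-adjacent-violators fit: power constrained to be non-decreasing in n.
    iso = IsotonicRegression(increasing=True)
    p_fit = iso.fit_transform(ns, ps)
    if target_power <= p_fit[0]:
        return int(round(ns[0]))    # target already met at the smallest n tried
    if target_power >= p_fit[-1]:
        return int(round(ns[-1]))   # target above all fits; let later batches move up
    # np.interp needs strictly increasing x-values; the isotonic fit is only
    # non-decreasing, so interpolate over the unique fitted power values.
    p_unique, idx = np.unique(p_fit, return_index=True)
    return int(round(np.interp(target_power, p_unique, ns[idx])))
```

For example, with the four pairs from steps 1-3, one would call `isotonic_target_n([N1, N2, N3, N4], [P1, P2, P3, P4], target_power=0.90)` to get the sample size for the fifth batch.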
For each iteration of the up-and-down-like method, if the proportion of rejections from the last batch of $m$ replicates is greater than the target power, then we subtract 1 from the
current sample size estimate; otherwise we add 1. We continue with that up-and-down-like method until the number of batches reaches $b_{tot}$. The up-and-down-like method was added because sometimes the algorithm would get stuck at too large a sample size estimate.
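A minimal sketch of that final up-and-down phase, reusing `estimate_power` from the step-1 snippet (again, the helper name and loop structure are assumptions for illustration):

```python
def up_down_phase(ns, ps, m, b_tot, target_power):
    """Up-and-down-like phase: adjust the sample size by 1 after each batch
    until b_tot batches have been run. `ns` and `ps` hold the sample sizes
    and rejection proportions from all earlier batches."""
    n_current = ns[-1]
    while len(ns) < b_tot:
        # step down if the last batch over-shot the target power, else step up
        n_current = n_current - 1 if ps[-1] > target_power else n_current + 1
        ns.append(n_current)
        ps.append(estimate_power(n_current, m))
    return ns, ps
```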