A Robbins-Monro stochastic approximation update is used
to adapt the tuning parameter of the proposal kernel. The
idea is to update the tuning parameter at each iteration
of the sampler: $$h^{(i+1)} = h^{(i)} +
\eta^{(i+1)}(\alpha^{(i)} - \alpha_{opt}),$$
where $h^{(i)}$ and
$\alpha^{(i)}$ are the tuning parameter
and acceptance probability at iteration $i$ and
$\alpha_{opt}$ is a target acceptance
probability. For Gaussian targets, and in the limit as the dimension of the problem tends to infinity, the asymptotically optimal target acceptance probability for MALA algorithms is 0.574.
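As a minimal sketch (in Python, not the package's own code), the update rule above can be written as a single function; the argument names mirror $h^{(i)}$, $\alpha^{(i)}$ and $\eta^{(i+1)}$, and the default target is the MALA value 0.574 quoted above.

```python
def adapt_tuning(h, alpha_i, eta_next, alpha_opt=0.574):
    """One Robbins-Monro update of the proposal tuning parameter.

    h         -- current tuning parameter h^(i)
    alpha_i   -- acceptance probability alpha^(i) at iteration i
    eta_next  -- step size eta^(i+1)
    alpha_opt -- target acceptance probability (0.574 for MALA)
    """
    # h grows when acceptance exceeds the target (proposals can afford to be
    # bolder) and shrinks when acceptance falls short (proposals are too bold).
    return h + eta_next * (alpha_i - alpha_opt)
```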
The sequence $\{\eta^{(i)}\}$ is chosen so that $\sum_{i=0}^\infty\eta^{(i)}$ is infinite whilst $\sum_{i=0}^\infty\left(\eta^{(i)}\right)^{1+\epsilon}$ is finite for some $\epsilon>0$. These two
conditions ensure that any value of $h$ can be
reached, but in a way that maintains the ergodic
behaviour of the chain. One class of sequences with this
property is $$\eta^{(i)} = \frac{C}{i^\alpha},$$ where $\alpha\in(0,1]$ and $C>0$; here $\alpha$ denotes the decay exponent of the step-size sequence, not the acceptance probability $\alpha^{(i)}$. The adaptive scheme is set via the mcmcpars function.
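Below is a self-contained sketch of the whole scheme under stated assumptions: the acceptance probability is replaced by a hypothetical toy function of $h$ (standing in for an actual MALA step), and the values C = 1.0, ALPHA = 0.66 and the starting value h = 1.0 are illustrative choices only, not defaults of mcmcpars.

```python
import math

C, ALPHA, TARGET = 1.0, 0.66, 0.574   # illustrative constants: alpha in (0, 1], C > 0

def eta(i):
    """Step-size sequence eta^(i) = C / i^alpha."""
    return C / i ** ALPHA

def toy_acceptance(h):
    """Hypothetical stand-in for the MALA acceptance rate as a function of h."""
    return math.exp(-h)               # decreasing in h, lies in (0, 1] for h >= 0

h = 1.0                               # initial tuning parameter h^(0)
for i in range(1, 5001):
    alpha_i = toy_acceptance(h)       # in practice: observed acceptance at iteration i
    h += eta(i) * (alpha_i - TARGET)  # Robbins-Monro update towards the target rate

print(round(h, 3), round(toy_acceptance(h), 3))  # acceptance settles near 0.574
```

In this toy setting the recursion drives the acceptance rate towards the target of 0.574; in an actual sampler the same update is applied with the observed acceptance probability of each MALA iteration in place of the toy function.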