The functions evidence_CTI and evidence_CTI_CF can be used to improve upon the thermodynamic integration (TI) estimate of the normalising constant with ZV-CV and CF, respectively. The functions evidence_SMC and evidence_SMC_CF do the same for the sequential Monte Carlo (SMC) normalising constant identity.
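The TI identity underlying evidence_CTI is \(\log Z = \int_0^1 E_{p_t}[\log L] \, dt\), where \(p_t\) is the power posterior at temperature \(t\), estimated by quadrature over the temperature schedule. The Python sketch below (illustrative only; the package itself is implemented in R/C++) shows a plausible first-order (trapezoidal) rule and a second-order rule with a variance-based correction in the style of Friel et al. (2014), using the fact that \(dE_t[\log L]/dt = Var_t[\log L]\). The package's exact quadrature rules may differ.

```python
import numpy as np

def ti_first_order(temps, E):
    # Trapezoidal rule for log Z = integral_0^1 E_t[log L] dt (TI identity).
    temps, E = np.asarray(temps), np.asarray(E)
    dt = np.diff(temps)
    return np.sum(dt * (E[:-1] + E[1:]) / 2)

def ti_second_order(temps, E, V):
    # Corrected trapezoidal rule using dE_t/dt = Var_t[log L]
    # (second-order quadrature, in the style of Friel et al., 2014).
    temps, E, V = map(np.asarray, (temps, E, V))
    dt = np.diff(temps)
    return np.sum(dt * (E[:-1] + E[1:]) / 2 - dt**2 / 12 * (V[1:] - V[:-1]))

# Synthetic check: if E(t) = t^2 then V(t) = E'(t) = 2t and the true
# integral is 1/3; the corrected rule recovers it up to rounding.
t = np.array([0.0, 0.4, 1.0])
print(ti_first_order(t, t**2))          # crude trapezoid, with discretisation error
print(ti_second_order(t, t**2, 2 * t))  # close to 1/3 for this example
```

In practice the expectations \(E_t[\log L]\) and variances \(Var_t[\log L]\) are themselves estimated from the power posterior samples, which is where ZV-CV or CF variance reduction enters.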
evidence_CTI(
samples,
loglike,
der_loglike,
der_logprior,
temperatures,
temperatures_all,
most_recent,
est_inds,
options,
folds = 5
)

evidence_CTI_CF(
samples,
loglike,
der_loglike,
der_logprior,
temperatures,
temperatures_all,
most_recent,
est_inds,
steinOrder,
kernel_function,
sigma_list,
folds = 5
)
evidence_SMC(
samples,
loglike,
der_loglike,
der_logprior,
temperatures,
temperatures_all,
most_recent,
est_inds,
options,
folds = 5
)
evidence_SMC_CF(
samples,
loglike,
der_loglike,
der_logprior,
temperatures,
temperatures_all,
most_recent,
est_inds,
steinOrder,
kernel_function,
sigma_list,
folds = 5
)
The function evidence_CTI returns a list containing the following components:
log_evidence_PS1
: The 1st order quadrature estimate for the log normalising constant
log_evidence_PS2
: The 2nd order quadrature estimate for the log normalising constant
regression_LL
: The set of \(\tau\) zvcv type returns for the 1st order quadrature expectations
regression_vLL
: The set of \(\tau\) zvcv type returns for the 2nd order quadrature expectations
The function evidence_CTI_CF returns a list containing the following components:
log_evidence_PS1
: The 1st order quadrature estimate for the log normalising constant
log_evidence_PS2
: The 2nd order quadrature estimate for the log normalising constant
regression_LL
: The set of \(\tau\) CF_crossval type returns for the 1st order quadrature expectations
regression_vLL
: The set of \(\tau\) CF_crossval type returns for the 2nd order quadrature expectations
selected_LL_CF
: The set of \(\tau\) selected tuning parameters from sigma_list for the 1st order quadrature expectations
selected_vLL_CF
: The set of \(\tau\) selected tuning parameters from sigma_list for the 2nd order quadrature expectations
The function evidence_SMC returns a list containing the following components:
log_evidence
: The logged SMC estimate for the normalising constant
regression_SMC
: The set of \(\tau\) zvcv type returns for the expectations
The function evidence_SMC_CF returns a list containing the following components:
log_evidence
: The logged SMC estimate for the normalising constant
regression_SMC
: The set of \(\tau\) CF_crossval type returns for the expectations
selected_CF
: The set of \(\tau\) selected tuning parameters from sigma_list for the expectations
samples
: An \(N\) by \(d\) by \(T\) matrix of samples from the \(T\) power posteriors, where \(N\) is the number of samples and \(d\) is the dimension of the target distribution
loglike
: An \(N\) by \(T\) matrix of log likelihood values corresponding to samples
der_loglike
: An \(N\) by \(d\) by \(T\) matrix of the derivatives of the log likelihood with respect to the parameters, with parameter values corresponding to samples
der_logprior
: An \(N\) by \(d\) by \(T\) matrix of the derivatives of the log prior with respect to the parameters, with parameter values corresponding to samples
temperatures
: A vector of length \(T\) of temperatures for the power posteriors
temperatures_all
: An adjusted vector of length \(\tau\) of temperatures. Better performance should be obtained with a more conservative temperature schedule. See Expand_Temperatures for a function to adjust the temperatures.
most_recent
: A vector of length \(\tau\) which gives the indices in the original temperatures that the new temperatures correspond to.
est_inds
: (optional) A vector of indices for the estimation-only samples. The default when est_inds is missing or NULL is to perform both estimation of the control variates and evaluation of the integral using all samples. Otherwise, the samples from est_inds are used in estimating the control variates and the remainder are used in evaluating the integral. Splitting the indices in this way can be used to reduce bias from adaptation and to make computation feasible for very large sample sizes (a small est_inds is faster), but in general it will increase the variance of the estimator.
options
: A list of control variate specifications for ZV-CV. This can be a single list containing the elements below (the defaults are used for elements which are not specified). Alternatively, it can be a list of lists containing any or all of the elements below. Where the latter is used, the function zvcv automatically selects the best performing option based on cross-validation.
folds
: The number of folds used in k-fold cross-validation for selecting the optimal control variate. For ZV-CV, this may include selection of the optimal polynomial order, regression type and subset of parameters, depending on options. For CF, this includes the selection of the optimal tuning parameters in sigma_list. The default is five.
steinOrder
: (optional) The order of the Stein operator. The default is 1 in the control functionals paper (Oates et al, 2017) and 2 in the semi-exact control functionals paper (South et al, 2020). The following values are currently available: 1 for all kernels and 2 for "gaussian", "matern" and "RQ". See below for further details.
kernel_function
: (optional) Choose between "gaussian", "matern", "RQ", "product" or "prodsim". See below for further details.
sigma_list
: (optional between this and K0_list) A list of tuning parameters for the specified kernel. This is a list of single length-scale parameters for "gaussian" and "RQ", a list of vectors containing the length-scale and smoothness parameters for "matern", and a list of vectors of the two parameters for "product" and "prodsim". See below for further details. When sigma_list is specified and not K0_list, the \(K0\) matrix is computed twice for each selected tuning parameter.
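The SMC identity targeted by evidence_SMC expresses the normalising constant as a product of stepwise expectations, \(Z = \prod_k E_{p_{t_k}}[L^{t_{k+1}-t_k}]\), each estimated by a sample mean over the power posterior at temperature \(t_k\). Below is a minimal Python sketch of the raw (pre-variance-reduction) estimator, ignoring the most_recent re-indexing; it is illustrative only and not the package's implementation.

```python
import numpy as np

def log_evidence_smc(loglike, temperatures):
    # Stepwise SMC identity: log Z = sum_k log E_{t_k}[ L^(t_{k+1} - t_k) ],
    # with each expectation estimated by the sample mean over the N samples
    # from the power posterior at temperature t_k (log-sum-exp for stability).
    loglike = np.asarray(loglike)   # N x T matrix of log-likelihood values
    t = np.asarray(temperatures)
    N = loglike.shape[0]
    log_Z = 0.0
    for k in range(len(t) - 1):
        w = (t[k + 1] - t[k]) * loglike[:, k]
        log_Z += np.logaddexp.reduce(w) - np.log(N)
    return log_Z

# Sanity check: a constant likelihood L(x) = c gives Z = c for any schedule,
# since the temperature increments sum to one.
c = 2.5
ll = np.full((100, 4), np.log(c))
print(log_evidence_smc(ll, [0.0, 0.3, 0.7, 1.0]))  # close to log(2.5)
```

evidence_SMC and evidence_SMC_CF improve on this raw estimator by applying ZV-CV or CF to each stepwise expectation.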
The kernel in Stein-based kernel methods is \(L_x L_y k(x,y)\) where \(L_x\) is a first or second order Stein operator in \(x\) and \(k(x,y)\) is some generic kernel to be specified.
The Stein operators for distribution \(p(x)\) are defined as:
steinOrder=1
: \(L_x g(x) = \nabla_x^T g(x) + \nabla_x \log p(x)^T g(x)\) (see e.g. Oates et al (2017))
steinOrder=2
: \(L_x g(x) = \Delta_x g(x) + \nabla_x \log p(x)^T \nabla_x g(x)\) (see e.g. South et al (2020))
Here \(\nabla_x\) is the first order derivative wrt \(x\) and \(\Delta_x = \nabla_x^T \nabla_x\) is the Laplacian operator.
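For steinOrder=1, applying the operator in both arguments of a base kernel gives the familiar Stein kernel \(k_0(x,y) = \nabla_x^T \nabla_y k + \nabla_x \log p(x)^T \nabla_y k + \nabla_y \log p(y)^T \nabla_x k + (\nabla_x \log p(x)^T \nabla_y \log p(y)) k\). Here is a Python sketch of this construction for the Gaussian base kernel; it is purely illustrative, as the package builds the corresponding \(K0\) matrices internally.

```python
import numpy as np

def stein_kernel_gaussian(x, y, grad_logp_x, grad_logp_y, sigma):
    # First-order Stein kernel k0(x,y) = L_x L_y k(x,y) for the Gaussian
    # base kernel k(x,y) = exp(-z(x,y)/sigma^2), z(x,y) = sum_j (x_j - y_j)^2.
    d = x - y
    z = np.dot(d, d)
    k = np.exp(-z / sigma**2)
    gx = -2.0 * d / sigma**2 * k   # grad_x k
    gy = 2.0 * d / sigma**2 * k    # grad_y k
    # divergence term: sum_i d^2 k / (dx_i dy_i)
    div_xy = (2.0 * len(x) / sigma**2 - 4.0 * z / sigma**4) * k
    return (div_xy + np.dot(grad_logp_x, gy) + np.dot(grad_logp_y, gx)
            + np.dot(grad_logp_x, grad_logp_y) * k)

# Example entry for a standard normal target, where grad log p(x) = -x.
x = np.array([0.3, -0.7])
y = np.array([1.1, 0.4])
print(stein_kernel_gaussian(x, y, -x, -y, 1.3))
```

Expectations of \(k_0\) under \(p\) vanish in each argument, which is what makes these kernels useful for building control variates.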
The generic kernels which are implemented in this package are listed below. Note that the input parameter sigma
defines the kernel parameters \(\sigma\).
"gaussian"
: A Gaussian kernel,
$$k(x,y) = exp(-z(x,y)/\sigma^2)$$
"matern"
: A Matern kernel with \(\sigma = (\lambda,\nu)\),
$$k(x,y) = bc^{\nu}z(x,y)^{\nu/2}K_{\nu}(c z(x,y)^{0.5})$$ where \(b=2^{1-\nu}(\Gamma(\nu))^{-1}\), \(c=(2\nu)^{0.5}\lambda^{-1}\) and \(K_{\nu}(x)\) is the modified Bessel function of the second kind. Note that \(\lambda\) is the length-scale parameter and \(\nu\) is the smoothness parameter (which defaults to 2.5 for \(steinOrder=1\) and 4.5 for \(steinOrder=2\)).
"RQ"
: A rational quadratic kernel,
$$k(x,y) = (1+\sigma^{-2}z(x,y))^{-1}$$
"product"
: The product kernel that appears in Oates et al (2017) with \(\sigma = (a,b)\),
$$k(x,y) = (1+a z(x) + a z(y))^{-1} exp(-0.5 b^{-2} z(x,y)) $$
"prodsim"
: A slightly different product kernel with \(\sigma = (a,b)\) (see e.g. https://www.imperial.ac.uk/inference-group/projects/monte-carlo-methods/control-functionals/),
$$k(x,y) = (1+a z(x))^{-1}(1 + a z(y))^{-1} exp(-0.5 b^{-2} z(x,y)) $$
In the above equations, \(z(x) = \sum_j x[j]^2\) and \(z(x,y) = \sum_j (x[j] - y[j])^2\). For the last two kernels, the code only has implementations for steinOrder=1. Each combination of steinOrder and kernel_function above is currently hard-coded but it may be possible to extend this to other kernels in future versions using autodiff. The calculations for the first three kernels above are detailed in South et al (2020).
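The first three base kernels can be written down directly from the formulas above. The following Python sketch is illustrative only (the package's implementations are compiled); for "matern", the smoothness \(\nu\) would default to 2.5 or 4.5 as described above.

```python
import numpy as np
from scipy.special import kv, gamma  # modified Bessel K_nu and Gamma function

def z(x, y):
    # z(x,y) = sum_j (x_j - y_j)^2, as in the kernel definitions above
    return np.sum((np.asarray(x) - np.asarray(y))**2)

def k_gaussian(x, y, sigma):
    return np.exp(-z(x, y) / sigma**2)

def k_rq(x, y, sigma):
    # rational quadratic kernel
    return (1.0 + z(x, y) / sigma**2)**(-1)

def k_matern(x, y, lam, nu):
    # Matern kernel: k = b c^nu z^(nu/2) K_nu(c sqrt(z)), with
    # b = 2^(1-nu)/Gamma(nu) and c = sqrt(2 nu)/lambda.
    r2 = z(x, y)
    if r2 == 0.0:
        return 1.0  # limiting value at x = y
    b = 2.0**(1.0 - nu) / gamma(nu)
    c = np.sqrt(2.0 * nu) / lam
    return b * c**nu * r2**(nu / 2.0) * kv(nu, c * np.sqrt(r2))
```

For half-integer \(\nu\) the Matern kernel has a closed form; e.g. for \(\nu = 2.5\) it equals \((1 + s + s^2/3)\exp(-s)\) with \(s = \sqrt{5}\,\|x-y\|/\lambda\), which can be used to sanity-check the Bessel-function expression.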
Leah F. South
Mira, A., Solgi, R., & Imparato, D. (2013). Zero variance Markov chain Monte Carlo for Bayesian estimators. Statistics and Computing, 23(5), 653-662.
South, L. F., Oates, C. J., Mira, A., & Drovandi, C. (2019). Regularised zero variance control variates for high-dimensional variance reduction. https://arxiv.org/abs/1811.05073
Oates, C. J., Girolami, M., & Chopin, N. (2017). Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society: Series B, 79(3), 695-718.
South, L. F., Karvonen, T., Nemeth, C., Girolami, M., & Oates, C. J. (2020). Semi-exact control functionals from Sard's method. https://arxiv.org/abs/2002.00033
See an example at VDP and see ZVCV for more package details. See Expand_Temperatures for a function that can be used to find stricter (or less strict) temperature schedules based on the conditional effective sample size.