Set computational options for the sampling algorithms
sampler_control(
add.outer.R = NULL,
add.eps.I = FALSE,
eps = sqrt(.Machine$double.eps),
recompute.e = TRUE,
cMVN.sampler = FALSE,
CG = NULL,
block = TRUE,
block.V = TRUE,
auto.order.block = TRUE,
chol.control = chol_control(),
max.size.cps.template = 100,
PG.approx = TRUE,
PG.approx.m = -2L
)
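
As an illustration, here is a sketch of how such a control list might be constructed and inspected. The final line is hypothetical: the sampler function that consumes the list (and whether it is passed via a control argument) depends on the package in use.

    # Build a control list overriding a few defaults; all other options
    # keep the defaults shown in the usage above.
    ctrl <- sampler_control(
      add.eps.I = TRUE,   # keep the precision matrix sparse for faster sampling
      eps       = 1e-9,   # at the cost of a small deviation from the posterior
      block     = TRUE
    )
    str(ctrl)  # inspect the resulting list of computational options

    # Hypothetical use: pass the list to a sampler via a control argument, e.g.
    # sampler <- create_sampler(y ~ x, data = d, control = ctrl)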
A list with specified computational options used by various sampling functions.
add.outer.R: whether to add the outer product of a constraint matrix to the conditional posterior precision matrix of coefficients sampled in a block. This can resolve singularity due to intrinsic GMRF components. If add.outer.R=NULL, a simple heuristic decides whether to add the outer product of (possibly a submatrix of) the constraint matrix.
add.eps.I: whether to add a small positive multiple of the identity matrix to the conditional posterior precision matrix of coefficients sampled in a block. If needed, this can resolve singularity as an alternative to add.outer.R=TRUE. The advantage of add.eps.I=TRUE is that a sparse conditional posterior precision matrix remains sparse, so sampling is faster, at the cost of a slight deviation from the target posterior distribution, depending on the value of eps. If add.eps.I=TRUE, add.outer.R is set to FALSE.
eps: a positive scalar, used only if add.eps.I=TRUE. It should be small enough that the sampler does not deviate too much from the desired posterior distribution of the coefficients sampled in a block, but not so small that it fails to resolve the singularity of the conditional posterior precision matrix.
recompute.e: when FALSE, residuals or linear predictors are computed only at the start of the simulation. This may give a modest speed-up, but in some cases may be less accurate due to accumulation of round-off error. Default is TRUE.
cMVN.sampler: whether an extended linear system including dual variables is used for equality-constrained multivariate normal sampling. If TRUE, this may improve the performance of the blocked Gibbs sampler, especially in case of a large number of equality constraints, typically (intrinsic) GMRF identifiability constraints.
CG: use a conjugate gradient iterative algorithm instead of Cholesky updates for sampling the model's coefficients. This must be a list with possible components max.it, stop.criterion, verbose, preconditioner and scale. See the help for function CG_control, which can be used to specify these options. Conjugate gradient sampling is currently an experimental feature that can be used for blocked Gibbs sampling, but with some limitations.
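
A sketch of enabling the experimental conjugate gradient sampler; the option values below are illustrative only:

    # Use CG iterations instead of Cholesky updates for coefficient sampling
    # (experimental; see CG_control for the available options).
    ctrl <- sampler_control(
      CG = CG_control(max.it = 100, stop.criterion = 1e-6, verbose = FALSE)
    )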
block: if TRUE, the default, all coefficients are sampled in a single Gibbs block. If FALSE, the coefficients of each model component are sampled separately, in sequence. Alternatively, a list of character vectors with names of model components can be passed to specify a grouping of model components whose coefficients should be sampled together in blocks.
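
As a sketch, such a grouping can be specified with a list of character vectors; the component names "beta" and "u" below are hypothetical and must match the names of model components in the actual model:

    # Sample the (hypothetical) components "beta" and "u" jointly in one
    # Gibbs block; any remaining components are sampled separately.
    ctrl <- sampler_control(block = list(c("beta", "u")))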
block.V: if TRUE, the default, all coefficients of reg and gen components in a variance model formula are sampled in a single block. Alternatively, a list of character vectors with names of model components whose coefficients should be sampled together in blocks.
auto.order.block: whether Gibbs blocks should be ordered automatically such that those with the sparsest design matrices come first. This ordering can make Cholesky updates more efficient.
chol.control: options for Cholesky decomposition; see chol_control.
max.size.cps.template: maximum allowed size in MB of the sparse matrix serving as a template for the sparse symmetric crossproduct X'QX of a dgCMatrix X, where Q is a diagonal matrix subject to change.
PG.approx: whether Polya-Gamma draws for logistic binomial models are approximated by a hybrid gamma convolution approach. If FALSE, BayesLogit::rpg is used, which is exact for some values of the shape parameter.
PG.approx.m: if PG.approx=TRUE, the number of explicit gamma draws in the sum-of-gammas representation of the Polya-Gamma distribution. The remaining (infinite) convolution is approximated by a single moment-matching gamma draw. Special values: -2L for a default choice, depending on the value of the shape parameter, that balances performance and accuracy; -1L for a moment-matching normal approximation; and 0L for a moment-matching gamma approximation.
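
For example, the Polya-Gamma approximation level could be varied as follows (a sketch; the speed/accuracy trade-off of each setting is as described above):

    ctrl_default <- sampler_control(PG.approx.m = -2L)   # adaptive default
    ctrl_normal  <- sampler_control(PG.approx.m = -1L)   # moment-matching normal approximation
    ctrl_gamma   <- sampler_control(PG.approx.m = 0L)    # moment-matching gamma approximation
    ctrl_exact   <- sampler_control(PG.approx = FALSE)   # use BayesLogit::rpg instead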