Set computational options for the sampling algorithms
sampler_control(
add.outer.R = TRUE,
recompute.e = TRUE,
expanded.cMVN.sampler = FALSE,
CG = NULL,
block = TRUE,
block.V = TRUE,
auto.order.block = TRUE,
chol.control = chol_control(),
max.size.cps.template = 100,
PG.approx = TRUE,
PG.approx.m = -2L,
CRT.approx.m = 20L
)
A list with specified computational options used by various sampling functions.
add.outer.R: whether to add the outer product of a constraint matrix for a better conditioned linear system of equations, typically for coefficients sampled in a Gibbs block. Default is TRUE. If NULL, a simple heuristic is used to decide whether to add the outer product of possibly a submatrix of the constraint matrix.
recompute.e: when FALSE, residuals or linear predictors are only computed at the start of the simulation. This may give a modest speed-up, but in some cases may be less accurate due to round-off error accumulation. Default is TRUE.
expanded.cMVN.sampler: whether an expanded linear system including dual variables is used for equality-constrained multivariate normal sampling. If set to TRUE this may improve the performance of the blocked Gibbs sampler in case of a large number of equality constraints, typically GMRF identifiability constraints. Default is FALSE.
CG: use a conjugate gradient iterative algorithm instead of Cholesky updates for sampling the model's coefficients. This must be a list with possible components max.it, stop.criterion, verbose, preconditioner and scale; see the help for function CG_control, which can be used to specify these options. Conjugate gradient sampling is currently an experimental feature that can be used for blocked Gibbs sampling, but with some limitations. Default is NULL.
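For example, a hypothetical call enabling conjugate gradient sampling (the component values shown are illustrative choices, not package defaults):

```r
# Illustrative sketch: request conjugate gradient sampling with a cap on
# the number of iterations and verbose output switched off; max.it and
# verbose are components documented for CG_control, the values are made up
ctrl <- sampler_control(
  CG = CG_control(max.it = 100, verbose = FALSE)
)
```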
block: if TRUE, the default, all coefficients are sampled in a single block. Alternatively, a list of character vectors with names of model components whose coefficients should be sampled together in blocks.
block.V: if TRUE, the default, all coefficients of reg and gen components in a variance model formula are sampled in a single block. Alternatively, a list of character vectors with names of model components whose coefficients should be sampled together in blocks.
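A hypothetical blocking specification may look as follows (the model component names "beta", "u" and "v" are made up for illustration):

```r
# Illustrative sketch: sample the coefficients of components "beta" and "u"
# together in one Gibbs block, and those of "v" in a separate block
ctrl <- sampler_control(
  block = list(c("beta", "u"), "v")
)
```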
auto.order.block: whether Gibbs blocks should be ordered automatically such that those with the sparsest design matrices come first. This ordering can make Cholesky updates more efficient.
chol.control: options for the Cholesky decomposition; see chol_control.
max.size.cps.template: maximum allowed size in MB of the sparse matrix serving as a template for the sparse symmetric crossproduct X'QX of a dgCMatrix X, where Q is a diagonal matrix subject to change. Default is 100.
PG.approx: whether Polya-Gamma draws for logistic binomial models are approximated by a hybrid gamma convolution approach. If not, BayesLogit::rpg is used, which is exact for some values of the shape parameter. Default is TRUE.
PG.approx.m: if PG.approx=TRUE, the number of explicit gamma draws in the sum-of-gammas representation of the Polya-Gamma distribution. The remaining (infinite) convolution is approximated by a single moment-matching gamma draw. Special values: -2L for a default choice depending on the value of the shape parameter, balancing performance and accuracy; -1L for a moment-matching normal approximation; and 0L for a moment-matching gamma approximation.
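The sum-of-gammas representation behind this option can be sketched in plain R (an illustration of the approach only, not the package's internal implementation; the function name rPG_approx is made up). A Polya-Gamma variable PG(b, z) has the representation (1/(2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2/(4*pi^2)) with independent g_k ~ Gamma(b, 1); the first m terms are drawn explicitly and the remaining convolution is replaced by one moment-matching gamma draw:

```r
# Approximate Polya-Gamma sampler based on a truncated sum-of-gammas
# representation; the (near-)infinite tail is matched by a single gamma draw
rPG_approx <- function(n, b, z, m = 20L) {
  k <- seq_len(10000L)                       # many terms; tail beyond is negligible
  d <- (k - 0.5)^2 + z^2 / (4 * pi^2)        # denominators of the representation
  head.d <- d[seq_len(m)]
  tail.d <- d[-seq_len(m)]
  mu    <- b * sum(1 / tail.d)               # mean of the truncated remainder
  sig2  <- b * sum(1 / tail.d^2)             # variance of the truncated remainder
  shape <- mu^2 / sig2                       # moment-matched gamma parameters
  rate  <- mu / sig2
  out <- numeric(n)
  for (i in seq_len(n)) {
    g <- rgamma(m, shape = b)                        # m explicit gamma draws
    r <- rgamma(1L, shape = shape, rate = rate)      # moment-matched remainder
    out[i] <- (sum(g / head.d) + r) / (2 * pi^2)
  }
  out
}
```

As a rough check, the mean of PG(b, 0) is b/4, which the approximate sampler reproduces closely even for moderate m.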
CRT.approx.m: scalar integer specifying the degree of approximation to sampling from a Chinese Restaurant Table distribution. The approximation is based on Le Cam's theorem. Larger values yield a slower but more accurate sampler. Default is 20L.
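To illustrate the kind of approximation meant here, the following self-contained sketch (the function name rCRT_approx is made up; this is not the package's implementation) draws from CRT(r, n) by sampling the first m table-opening indicators exactly and replacing the remaining sum of low-probability Bernoulli variables by a single Poisson draw, as justified by Le Cam's theorem:

```r
# Approximate Chinese Restaurant Table sampler: customer i opens a new
# table with probability r / (r + i - 1); the first m indicators are drawn
# exactly and the low-probability tail is approximated by a Poisson draw
rCRT_approx <- function(r, n, m = 20L) {
  p <- r / (r + seq_len(n) - 1)              # table-opening probabilities
  m <- min(m, n)
  tables <- sum(runif(m) < p[seq_len(m)])    # exact Bernoulli draws
  if (n > m) {
    # Le Cam's theorem: a sum of independent Bernoulli(p_i) variables with
    # small p_i is close in total variation to Poisson(sum(p_i))
    tables <- tables + rpois(1L, sum(p[-seq_len(m)]))
  }
  tables
}
```

The mean of the approximate sampler matches the exact mean sum_i r/(r + i - 1); a larger m shifts more indicators from the Poisson approximation to exact draws.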