bcpglm(formula, link = "log", data, inits = NULL,
weights, offset, subset, na.action, contrasts = NULL,
n.chains = 3, n.iter = 2000, n.burnin = floor(n.iter/2),
n.thin = max(1, floor(n.chains * (n.iter - n.burnin)/n.sims)),
n.sims = 1000, n.report = 1000, prior.beta.mean, prior.beta.var,
bound.phi = 100, bound.p = c(1.01, 1.99), method = "dtweedie",
tune.iter = 4000, n.tune = 10, tune.weight = 0.25, ...)
Arguments

formula
an object of class "formula": a symbolic description of the model to be fitted. See also in glm.

link
a specification of the model link function. This can be a literal character string or a numeric value; if numeric, it is used as the link.power argument as in the tweedie family. The default is "log".

inits
a list of initial values to be used for each chain. For each element, it is a named list with three components 'beta', 'phi' and 'p'. If not supplied (NULL), the function will generate initial values automatically.

data, subset, na.action, contrasts
further arguments as in glm.

weights
NULL or a numeric vector. When it is numeric, it must be positive. Zero weights are not allowed in bcpglm.

offset
NULL or a numeric vector of length equal to the number of cases. One or more offset terms can be included in the formula.

n.chains
an integer indicating the number of Markov chains. The default is 3.

n.iter
the number of total iterations per chain. The default is 2000.

n.burnin
the number of burn-in iterations per chain. The default is n.iter/2, that is, discarding the first half of the simulations.

n.thin
the thinning rate. The default is max(1, floor(n.chains * (n.iter - n.burnin)/n.sims)), which (with the default n.sims = 1000) will only thin if there are at least 2000 simulations.

n.sims
the approximate number of simulations to keep after thinning. The default is 1000.

n.report
if greater than zero, intermediate results are reported every n.report iterations. The default is 1000.

prior.beta.mean
a vector of prior means for the model coefficients. If not supplied, a vector of 0's is used.

prior.beta.var
a vector of prior variances for the model coefficients. If not supplied, a vector of 10000's is used.

bound.phi
the upper bound of the Uniform prior distribution for the dispersion parameter phi. The default is 100.

bound.p
a vector of lower and upper bounds for the index parameter p. The default is c(1.01, 1.99).

method
how the full conditionals are evaluated. If method = "dtweedie" (the default), the full conditionals are computed using numerical methods to approximate the Tweedie density. If method = "latent", the latent Poisson variables are simulated instead, so that evaluation of the Tweedie density is avoided.

tune.iter
the number of iterations used for tuning the proposal variances. The default is 4000; set it to zero if the tuning process is not desired.

n.tune
if positive, the tune.iter iterations will be divided into n.tune loops. The default is 10.

tune.weight
the weight used in updating the proposal variance during the tuning iterations. The default is 0.25.
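The simulation-length arguments interlock: the default thinning rate is chosen so that roughly n.sims draws are kept in total. A quick base-R check of the default formula from the usage section (a sketch; the function name is ours, not part of the package):

```r
# Default thinning rate as given in the usage section:
# n.thin = max(1, floor(n.chains * (n.iter - n.burnin) / n.sims))
default.n.thin <- function(n.chains, n.iter, n.burnin, n.sims = 1000) {
  max(1, floor(n.chains * (n.iter - n.burnin) / n.sims))
}

# With the package defaults (3 chains, 2000 iterations, half burn-in),
# 3 * 1000 / 1000 = 3, so every 3rd draw is kept:
default.n.thin(n.chains = 3, n.iter = 2000, n.burnin = 1000)   # 3

# With the settings used in the examples (n.iter = 11000, n.burnin = 1000),
# the default would be 30; the examples override it with n.thin = 10:
default.n.thin(n.chains = 3, n.iter = 11000, n.burnin = 1000)  # 30
```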
Value

bcpglm returns an object of class "bcpglm". See bcpglm-class for details of the return values as well as the various methods available for this class. The posterior simulations are stored as an object of class "mcmc.list", so that the output analysis and diagnostic tools in the coda package can be applied directly.

Details

Two methods are implemented for evaluating the full conditionals: direct numerical approximation of the Tweedie density (method = "dtweedie") and simulation of the latent Poisson variables (method = "latent"). The first method is more general while the second method is much faster when it works.
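For intuition on the latent-variable method, the latent Poisson counts are positive by construction, so draws from a zero-truncated Poisson distribution are involved (the 'Details' below note that zero-truncated Poissons serve as rejection-sampling proposals). A minimal base-R sketch of drawing from such a distribution — our own illustration, not the package's implementation:

```r
# Sketch: draw from a zero-truncated Poisson(lambda), i.e. a Poisson
# conditioned on being positive (illustrative; not code from cplm).
rztpois <- function(n, lambda) {
  draws <- numeric(n)
  for (i in seq_len(n)) {
    repeat {                     # simple rejection: redraw until positive
      x <- rpois(1, lambda)
      if (x > 0) { draws[i] <- x; break }
    }
  }
  draws
}

set.seed(1)
x <- rztpois(1000, lambda = 0.5)
min(x)    # every draw is at least 1
mean(x)   # close to lambda / (1 - exp(-lambda)), about 1.27 here
```

Note that this naive rejection loop is wasteful for small lambda (most raw Poisson draws are zero); it is meant only to make the "zero-truncated proposal" idea concrete.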
Since this is a Bayesian model, prior distributions have to be specified for all parameters in the model. Normal distributions are used for the model coefficients ($\beta$), a Uniform distribution for the dispersion parameter ($\phi$), and a Uniform distribution for the index parameter ($p$). Prior means and variances for the model coefficients can be supplied using the arguments prior.beta.mean and prior.beta.var, respectively. The prior distribution for $\phi$ is Uniform on (0, bound.phi), and the bounds of the Uniform prior for $p$ can be specified in the argument bound.p. See details in Section 'Arguments'.
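The prior structure just described can be sketched directly in base R. This is an illustration only (the hyperparameter values for the coefficients below are example choices, not values taken from the package):

```r
# Sketch of the joint log prior described above: independent Normal priors
# on the coefficients, Uniform(0, bound.phi) on phi, Uniform(bound.p) on p.
# beta.mean/beta.var are illustrative stand-ins for prior.beta.mean/var.
log.prior <- function(beta, phi, p,
                      beta.mean = 0, beta.var = 10000,
                      bound.phi = 100, bound.p = c(1.01, 1.99)) {
  sum(dnorm(beta, beta.mean, sqrt(beta.var), log = TRUE)) +  # Normal on beta
    dunif(phi, 0, bound.phi, log = TRUE) +                   # Uniform on phi
    dunif(p, bound.p[1], bound.p[2], log = TRUE)             # Uniform on p
}

log.prior(beta = c(0.5, -1), phi = 2, p = 1.5)
# Values outside the bounds get zero prior density (-Inf on the log scale):
log.prior(beta = c(0.5, -1), phi = 2, p = 2.5)  # -Inf
```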
In implementing the MCMC, a Gibbs sampler is constructed in which the model coefficients, the dispersion parameter and the index parameter are updated in turn, each conditional on the others. When method = "latent", simulation of the latent Poisson variables relies on rejection sampling, where zero-truncated Poisson distributions are used as proposals. If initial values are not supplied, they are generated from a cpglm fit with the specified model structure.

See Also

bcpglm-class, cpglm, mcmc, and tweedie for related information.

Examples

# fit the fineroot data with Bayesian models
# first use the latent variable approach
set.seed(10)
fit1 <- bcpglm(RLD~ factor(Zone)*factor(Stock), data=fineroot,
n.iter=11000, n.burnin=1000,
n.thin=10, n.report=5000, method="latent")
gelman.diag(fit1$sims.list)
# diagnostic plots
acfplot(fit1$sims.list, lag.max = 20)
xyplot(fit1$sims.list)
densityplot(fit1$sims.list)
# summary results
summary(fit1)
# now try direct density evaluation (this is much slower)
set.seed(10)
fit2 <- bcpglm(RLD~ factor(Zone)*factor(Stock), data=fineroot,
n.iter=11000, n.burnin=1000,
n.thin=10, n.report=5000)
gelman.diag(fit2$sims.list)
summary(fit2)
# now fit the Bayesian model to an insurance loss triangle
# (see Peters et al. 2009)
fit3 <- bcpglm(increLoss~ factor(year)+factor(lag), data=insLoss,
tune.iter=5000, n.tune = 10,
n.iter=11000, n.burnin=1000,
n.thin=10, n.report=5000, bound.p=c(1.1,1.95))
gelman.diag(fit3$sims.list)
summary(fit3)