This function fits a Bayesian quantile regression model.
blq(formula, data = NULL, p, mcmc = list(), prior = list(), marginal.likelihood = TRUE)
An object of class blm
representing the Bayesian parametric linear model fit.
Generic functions such as print
and fitted
have methods to show the results of the fit.
The MCMC samples of the parameters in the model are stored in the list mcmc.draws, the posterior samples of the fitted values are stored in the list fit.draws, and the MCMC samples for the log marginal likelihood are saved in the list loglik.draws.
The output list also includes the following objects:
posterior estimates for all parameters in the model.
the log marginal likelihood, computed using the method of Gelfand and Dey (1994).
correlation between \(y\) and \(\hat{y}\).
the matched call.
running time of the Markov chain from system.time().
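As a quick illustration of the stored components described above, the sketch below inspects them on a fitted object; it assumes a fit named fout (as in the examples further down) and uses only the component names listed here (mcmc.draws, fit.draws, loglik.draws). The names of the elements inside those lists are not assumed.
# Minimal sketch: inspecting the fitted object returned by blq().
# Assumes a fit named fout, e.g. fout <- blq(y ~ w, p = 0.5).
str(fout$mcmc.draws, max.level = 1)    # MCMC samples of the model parameters
str(fout$fit.draws, max.level = 1)     # posterior samples of the fitted values
str(fout$loglik.draws, max.level = 1)  # samples used for the log marginal likelihood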
an object of class “formula”.
an optional data frame.
quantile of interest (default=0.5).
a list giving the MCMC parameters.
The list includes the following integers (with default values in parentheses):
nblow (1000)
giving the number of MCMC iterations in the transition (burn-in) period,
nskip (1)
giving the thinning interval,
smcmc (1000)
giving the number of MCMC iterations saved for analysis.
a list giving the prior information. The list includes the following parameters
(default values specify the non-informative prior):
beta_m0
and beta_v0
giving the hyperparameters of the
multivariate normal prior for the parametric part, including the intercept,
sigma2_m0
and sigma2_v0
giving the prior mean and variance
of the inverse gamma prior for the scale parameter of the response.
a logical variable indicating whether the log marginal likelihood is calculated. The method of Gelfand and Dey (1994) is used.
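For reference, the sketch below shows a call that sets the mcmc and prior lists explicitly; the list-element names follow the descriptions above, while the particular values (and the dimensions of beta_m0 and beta_v0, chosen for a model with an intercept and one predictor) are illustrative assumptions, not recommended settings.
# Sketch only: explicit mcmc and prior settings for a model with an intercept
# and one predictor, so beta has length 2. All values are illustrative.
fout2 <- blq(y ~ w, p = 0.25,
             mcmc  = list(nblow = 2000, nskip = 5, smcmc = 2000),
             prior = list(beta_m0   = rep(0, 2),      # prior mean of beta
                          beta_v0   = diag(100, 2),   # prior covariance matrix of beta
                          sigma2_m0 = 1,              # prior mean of sigma^2
                          sigma2_v0 = 100),           # prior variance of sigma^2
             marginal.likelihood = TRUE)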
This generic function fits a Bayesian quantile regression model.
Let \(y_i\) and \(w_i\) be the response and the vector of parametric predictors, respectively; the covariates in \(w_i\) enter the model linearly. The model is as follows.
$$y_i = w_i^T\beta + \epsilon_i, ~ i=1,\ldots,n, $$ where the error terms \(\{\epsilon_i\}\) are a random sample from an asymmetric Laplace distribution, \(ALD_p(0,\sigma^2)\), which has the following probability density function: $$ALD_p(\epsilon; \mu, \sigma^2) = \frac{p(1-p)}{\sigma^2}\exp\Big(-\frac{(\epsilon-\mu)[p - I(\epsilon \le \mu)]}{\sigma^2}\Big),$$ where \(0 < p < 1\) is the skew parameter, \(\sigma^2 > 0\) is the scale parameter, \(-\infty < \mu < \infty\) is the location parameter, and \(I(\cdot)\) is the indicator function.
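To make this density concrete, here is a small sketch that evaluates it exactly as written above, with the \(\sigma^2\) parameterisation in both the normalising constant and the exponent; the function name dald_manual is hypothetical and not part of the package.
# Sketch: the ALD_p density as given above. dald_manual is a hypothetical name,
# not a package function; (eps <= mu) plays the role of the indicator I(. <= mu).
dald_manual <- function(eps, mu = 0, sigma2 = 1, p = 0.5) {
  p * (1 - p) / sigma2 * exp(-(eps - mu) * (p - (eps <= mu)) / sigma2)
}
# At p = 0.5 the density is symmetric about mu (a Laplace-type shape):
dald_manual(c(-1, 0, 1), mu = 0, sigma2 = 1, p = 0.5)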
The conjugate priors are assumed for \(\beta\) and \(\sigma\): $$\beta | \sigma \sim N(m_{0,\beta}, \sigma^2V_{0,\beta}), \quad \sigma^2 \sim IG\Big(\frac{r_{0,\sigma}}{2}, \frac{s_{0,\sigma}}{2}\Big)$$
Gelfand, A. E. and Dey, D. K. (1994) Bayesian model choice: asymptotics and exact calculations. Journal of the Royal Statistical Society, Series B (Methodological), 56, 501-514.
Kozumi, H. and Kobayashi, G. (2011) Gibbs sampling methods for Bayesian quantile regression. Journal of Statistical Computation and Simulation, 81(11), 1565-1578.
blr, gblr
#####################
# Simulated example #
#####################
# Simulate data
set.seed(1)
n <- 100
w <- runif(n)
y <- 3 + 2*w + rald(n, scale = 0.8, p = 0.5)
# Fit median regression
fout <- blq(y ~ w, p = 0.5)
# Summary
print(fout); summary(fout)
# fitted values
fit <- fitted(fout)
# Plots
plot(fout)
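# Additional sketch (not part of the original example): fit two further
# quantiles with the same simulated data and compare the printed summaries.
fout25 <- blq(y ~ w, p = 0.25)
fout75 <- blq(y ~ w, p = 0.75)
print(fout25)
print(fout75)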