ACSSpack (version 1.0.0.2)

ACSS_gs: ACSS algorithm

Description

Adaptive Correlated Spike and Slab (ACSS) algorithm with or without an adaptive burn-in Gibbs sampler. See Yang, Khare, and Michailidis (2024) for details.

Usage

ACSS_gs(
  Y,
  X,
  para = c("1sqrtp", "11", "4sqrtp", "Unif", "customize"),
  a = 1,
  b = 1,
  c = 1,
  s = NA,
  Max_burnin = 10,
  nmc = 5000,
  adaptive_burn_in = TRUE
)

Value

A list with two components: betahat, the estimated beta from majority voting, and Gibbs_res, containing nmc samples of beta, q, and lambda^2 from the Gibbs sampler.

Arguments

Y

A (centered) response vector.

X

A (column-centered) design matrix.
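Both inputs are expected to be centered before the call. A minimal sketch of preparing raw data (the names Y_raw and X_raw are illustrative, not part of ACSSpack):

```r
set.seed(1)
X_raw <- matrix(rnorm(30 * 5), 30, 5)  # illustrative raw design matrix
Y_raw <- rnorm(30)                     # illustrative raw response

Xc <- scale(X_raw, center = TRUE, scale = FALSE)  # center each column of X
Yc <- Y_raw - mean(Y_raw)                         # center the response
```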

para

Parameter pre-set. Options include "1sqrtp" (default), "11", "4sqrtp", "Unif", and "customize". See details below.

  • "1sqrtp": a=1, b=1, c=1, s=sqrt(p) (default)

  • "11": a=1, b=1, c=1, s=1

  • "4sqrtp": a=1, b=1, c=4, s=sqrt(p)

  • "Unif": a=1, b=1, c=0.8, s=p/1.2

  • "customize": user-defined a,b,c and s. If choose this option, please provide values for a,b,c and s.

If no parameter pre-set is provided, the default "1sqrtp" will be used.
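For reference, the pre-sets above can be written out as an explicit mapping. This is a hypothetical helper, not part of ACSSpack; it simply mirrors the bullet list:

```r
# Map each pre-set name to its (a, b, c, s) values, as listed above
preset_para <- function(para, p) {
  switch(para,
    "1sqrtp" = list(a = 1, b = 1, c = 1,   s = sqrt(p)),
    "11"     = list(a = 1, b = 1, c = 1,   s = 1),
    "4sqrtp" = list(a = 1, b = 1, c = 4,   s = sqrt(p)),
    "Unif"   = list(a = 1, b = 1, c = 0.8, s = p / 1.2),
    stop("for 'customize', supply a, b, c, and s directly")
  )
}

preset_para("1sqrtp", p = 9)  # a = 1, b = 1, c = 1, s = 3
```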

a

First shape parameter for the marginal of q; default = 1.

b

Second shape parameter for the marginal of q; default = 1.

c

Shape parameter for the marginal of lambda^2; a larger c introduces more shrinkage and stronger correlation. Default = 1.

s

Inverse scale parameter for the marginal of lambda^2; a larger s introduces more shrinkage. Default = sqrt(p).

Max_burnin

Maximum burn-in (in units of 100 steps) for the adaptive burn-in Gibbs sampler. The minimum value is 10, corresponding to 1000 hard burn-in steps. Default = 10.

nmc

Number of MCMC samples. Default=5000.

adaptive_burn_in

Logical. If TRUE, use the adaptive burn-in Gibbs sampler; if FALSE, use a fixed burn-in of Max_burnin (in units of 100 steps). Default = TRUE.
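In the fixed burn-in case, the total number of Gibbs steps follows from Max_burnin and nmc. A small sketch of the accounting, inferred from the toy example on this page (1000 burn-in steps + 100 samples = 1100 steps):

```r
# Total Gibbs steps for a fixed burn-in run;
# Max_burnin is counted in units of 100 steps (an inference from this page)
total_steps <- function(Max_burnin, nmc) Max_burnin * 100 + nmc

total_steps(10, 100)  # 1100, matching the toy example below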

Examples

## A toy example is given below to save time. Run the full example for better results
## by setting nmc = 5000 (the default).

n <- 30
p <- n

beta1 <- rep(0.1, p)
beta2 <- c(rep(0.2, p / 2), rep(0, p / 2))
beta3 <- c(rep(0.15, 3 * p / 4), rep(0, ceiling(p / 4)))
beta4 <- c(rep(1, p / 4), rep(0, ceiling(3 * p / 4)))
beta5 <- c(rep(3, ceiling(p / 20)), rep(0, 19 * p / 20))
betas <- list(beta1, beta3, beta2, beta4, beta5)

set.seed(123)
X <- matrix(rnorm(n * p), n, p)
Y <- c(X %*% betas[[1]] + rnorm(n))

## A toy example with p = 30 and 1100 total Gibbs steps; takes ~0.6 s
system.time({mod <- ACSS_gs(Y, X, para = "1sqrtp", nmc = 100)})

mod$beta                           ## estimated beta after majority voting
hist(mod$Gibbs_res$betamat[1, ])   ## histogram of beta_1
hist(mod$Gibbs_res$q)              ## histogram of q
hist(log(mod$Gibbs_res$lambdasq))  ## histogram of log(lambda^2)
hist(mod$Gibbs_res$R_2)            ## histogram of R^2 for each Gibbs iteration
plot(mod$Gibbs_res$q, type = "l")  ## trace plot of q
## joint posterior of model density and shrinkage
plot(log(mod$Gibbs_res$q / (1 - mod$Gibbs_res$q)), -log(mod$Gibbs_res$lambdasq),
     xlab = "logit(q)", ylab = "-log(lambda^2)",
     main = "Joint Posterior of Model Density and Shrinkage")
