
Monte Carlo sampling for Bayesian spline regression with an unknown (nonparametric) transformation. Cubic B-splines are used with a prior that penalizes roughness.
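For intuition only, a cubic B-spline basis of the kind used in spline regression can be constructed with the base splines package; this is an illustrative sketch, and the exact basis, knot placement, and roughness penalty used internally by sbsm may differ:
library(splines)
x = seq(0, 1, length.out = 200)                  # observation points on [0,1]
B = bs(x, df = 10, degree = 3, intercept = TRUE) # cubic B-spline basis matrix
dim(B)                                           # 200 x 10: one column per basis function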
sbsm(
  y,
  x = NULL,
  x_test = x,
  psi = NULL,
  laplace_approx = TRUE,
  fixedX = (length(y) >= 500),
  approx_g = FALSE,
  nsave = 1000,
  ngrid = 100,
  verbose = TRUE
)
a list with the following elements:

coefficients: the posterior mean of the regression coefficients
fitted.values: the posterior predictive mean at the test points x_test
post_theta: nsave x p samples from the posterior distribution of the regression coefficients
post_ypred: nsave x n_test samples from the posterior predictive distribution at x_test
post_g: nsave posterior samples of the transformation evaluated at the unique y values
model: the model fit (here, sbsm)

as well as the arguments passed in.
Arguments:

y: n x 1 response vector
x: n x 1 vector of observation points; if NULL, assume equally-spaced on [0,1]
x_test: n_test x 1 vector of testing points; if NULL, assume equal to x
psi: prior variance (inverse smoothing parameter); if NULL, sample this parameter
laplace_approx: logical; if TRUE, use a normal approximation to the posterior in the definition of the transformation; otherwise the prior is used
fixedX: logical; if TRUE, treat the design as fixed (non-random) when sampling the transformation; otherwise treat the covariates as random with an unknown distribution
approx_g: logical; if TRUE, apply a large-sample approximation for the transformation
nsave: number of Monte Carlo simulations
ngrid: number of grid points for inverse approximations
verbose: logical; if TRUE, print the time remaining
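The posterior draws listed above can be summarized directly with standard tools; a minimal sketch, assuming a fitted object fit from a call such as fit = sbsm(y, x):
theta_hat = colMeans(fit$post_theta)                             # posterior means (should match fit$coefficients)
theta_ci  = t(apply(fit$post_theta, 2, quantile, c(.025, .975))) # 95% credible intervals for coefficients
y_ci      = t(apply(fit$post_ypred, 2, quantile, c(.025, .975))) # 95% prediction intervals at x_test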
This function provides fully Bayesian inference for a transformed spline regression model using Monte Carlo (not MCMC) sampling. The transformation is modeled as unknown and learned jointly with the regression function (unless approx_g = TRUE, in which case a point approximation is used). The model applies to real-valued data, positive data, and compactly-supported data (the support is deduced automatically from the observed y values).

The results are typically unchanged whether laplace_approx is TRUE or FALSE; setting it to TRUE may reduce sensitivity to the prior, while setting it to FALSE may speed up computations for very large datasets. By default, fixedX is set to FALSE for smaller datasets (n < 500) and TRUE for larger datasets (n >= 500).
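Because the support is deduced from the observed y values, the same call applies to positive or bounded outcomes, and the arguments above can be used to trade off accuracy and speed. An illustrative sketch (the data and settings here are hypothetical, not a recommendation):
y_pos = exp(rnorm(1000))                  # positive outcomes; support deduced automatically
fit_fast = sbsm(y = y_pos,                # x = NULL: equally-spaced points on [0,1]
                psi = 100,                # fix the prior variance instead of sampling it
                approx_g = TRUE,          # large-sample approximation for the transformation
                nsave = 500)              # fewer Monte Carlo draws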
# Simulate some data:
n = 200 # sample size
x = sort(runif(n)) # observation points
# Transform a noisy, periodic function:
y = g_inv_bc(
sin(2*pi*x) + sin(4*pi*x) + rnorm(n),
lambda = .5) # Signed square-root transformation
# Fit the semiparametric Bayesian spline model:
fit = sbsm(y = y, x = x)
names(fit) # what is returned
# Note: this is Monte Carlo sampling...no need for MCMC diagnostics!
# Plot the model predictions (point and interval estimates):
pi_y = t(apply(fit$post_ypred, 2, quantile, c(0.05, .95))) # 90% PI
plot(x, y, type='n', ylim = range(pi_y, y),
     xlab = 'x', ylab = 'y', main = 'Fitted values and prediction intervals')
polygon(c(x, rev(x)), c(pi_y[,2], rev(pi_y[,1])), col='gray', border=NA)
lines(x, y, type='p') # observed points
lines(x, fitted(fit), lwd = 3) # fitted curve
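# A further sketch (not part of the original example): inspect the estimated transformation.
# This assumes fit$post_g stores draws of g at the sorted unique y values, as described above.
y_unique = sort(unique(y))
g_hat = colMeans(fit$post_g)                              # posterior mean of the transformation
g_ci  = t(apply(fit$post_g, 2, quantile, c(0.05, .95)))   # 90% pointwise credible bands
plot(y_unique, g_hat, type = 'l', lwd = 3,
     xlab = 'y', ylab = 'g(y)', main = 'Estimated transformation')
lines(y_unique, g_ci[,1], lty = 2); lines(y_unique, g_ci[,2], lty = 2)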