Buckley-James regression for right-censored survival data with high-dimensional covariates. Methods include L_2 boosting with componentwise linear least squares, componentwise P-splines, and regression trees. Other Buckley-James methods include elastic net, MCP, SCAD, MARS and ACOSSO (ACOSSO is not supported in the current version).
bujar(y, cens, x, valdata = NULL, degree = 1, learner = "linear.regression",
  center = TRUE, mimpu = NULL, iter.bj = 20, max.cycle = 5, nu = 0.1, mstop = 50,
  twin = FALSE, mstop2 = 100, tuning = TRUE, cv = FALSE, nfold = 5, method = "corrected",
  vimpint = TRUE, gamma = 3, lambda = NULL, whichlambda = NULL, lamb = 0, s = 0.5,
  nk = 4, wt.pow = 1, theta = NULL, rel.inf = FALSE, tol = .Machine$double.eps,
  n.cores = 2, rng = 123, trace = FALSE)
# S3 method for bujar
print(x, ...)
# S3 method for bujar
predict(object, newx=NULL, ...)
# S3 method for bujar
plot(x, ...)
# S3 method for bujar
coef(object, ...)
# S3 method for bujar
summary(object, ...)
original covariates
survival time
censoring indicator
imputed y
estimated y from ynew
estimated y from the testing sample
model fitted with the learner
original learner used
if degree=1, additive model; if degree=2, second-order interaction
MSE at each BJ iteration; only available in simulations or when valdata is provided
MSE from training data at the BJ termination
MSE with valdata
a vector of MSE for uncensored data at each BJ iteration
number of selected covariates at each BJ iteration
number of selected covariates at the claimed BJ termination
a vector with length equal to the number of covariates; each element is either 1 (covariate selected) or 0 (not selected)
estimated coefficients of the linear model
a vector with length equal to the number of columns of x; variable importance, between 0 and 100
measure of strength of interactions
largest absolute difference of estimated y; useful to monitor convergence
a vector with length equal to the number of BJ iterations; each element is a convergence measure
number of cycles of the BJ iterations
within a BJ cycle, the maximum difference of coefficients for BJ boosting
logical value; if TRUE, the BJ iterations did not converge
value of the L_2 norm; can be useful to assess convergence
a vector with length equal to the number of BJ iterations; each element is the selected tuning parameter mstop
type of convergence: 0, converged; 1, not converged but a cycle was found; 2, not converged and the maximum number of iterations was reached (see the sketch after this list for inspecting these components)
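A minimal sketch of inspecting these components on a fitted object. The component names coef.bj, ybstcon and contype below are assumptions about the returned list and should be verified with names(fit) against the installed version:

library("bujar")
data("wpbc", package = "TH.data")
wpbc2 <- wpbc[, 1:12]
wpbc2$status <- as.numeric(wpbc2$status) - 1
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)])
names(fit)     ## list all components of the returned "bujar" object
fit$coef.bj    ## assumed component name: estimated linear-model coefficients
fit$ybstcon    ## assumed component name: per-iteration convergence measure
fit$contype    ## assumed component name: convergence type, 0/1/2 as above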
y: survival time
cens: censoring indicator, must be 0 or 1 with 0=alive, 1=dead
x: covariate matrix
object: an object of class "bujar"
newx: covariate matrix for prediction
valdata: test data; the first column must be the survival time, the second column the censoring indicator, and the remaining columns must match x
degree: degree of interaction for MARS/tree/linear regression; if degree=2, second-order interaction; if degree=1, additive model
learner: methods used for BJ regression
center: logical value; if TRUE, center the covariates
mimpu: initial estimate; if TRUE, mean imputation; if FALSE, imputation with the marginally best variable in a linear regression; if NULL, imputed with 0
iter.bj: number of BJ iterations
max.cycle: maximum number of cycles allowed
nu: step-size boosting parameter
mstop: boosting tuning parameter. It can be a single number or a vector of length iter.bj + max.cycle. If cv=TRUE, then mstop is the maximum value of the tuning parameter searched over (see the sketch after this argument list)
twin: logical value; if TRUE, twin boosting
mstop2: twin boosting tuning parameter
tuning: logical value; if TRUE, the tuning parameter is selected by CV or AIC/BIC methods; ignored if twin=TRUE, for which no tuning parameter selection is implemented
cv: logical value; if TRUE, cross-validation is used to select the tuning parameter; only used if tuning=TRUE; ignored if tuning=FALSE or twin=TRUE
nfold: number of folds for cross-validation
method: method for boosting tuning parameter selection by AIC
vimpint: logical value; if TRUE, compute variable importance and interaction measures for MARS if learner="mars" and degree > 1
gamma: MCP or SCAD gamma tuning parameter
lambda: MCP or SCAD lambda tuning parameter
whichlambda: which lambda is used for the MCP or SCAD lambda tuning parameter
lamb: elastic net lambda tuning parameter; only used if learner="enet"
s: the second elastic net tuning parameter, a fraction in (0, 1); only used if learner="enet"
nk: number of basis functions for learner="mars"
wt.pow: not used but kept for historical reasons; only for learner="acosso". This is a power-of-weight parameter; it might be chosen by CV from c(0, 1.0, 1.5, 2.0, 2.5, 3.0). If wt.pow=0, this reduces to the COSSO method
theta: for learner="acosso"; not used now. A numerical vector of 0s and 1s, where 0 means the variable is not included and 1 means it is included. See Storlie et al. (2009)
rel.inf: logical value; if TRUE, variable importance and interaction importance measures are computed
tol: convergence criterion
n.cores: the number of CPU cores to use; the cross-validation loop will attempt to send different CV folds to different cores. Used for learner="tree"
rng: a seed for random number generation in boosting trees
trace: logical value; if TRUE, print interim computing results
...: additional arguments passed to the estimation methods, for instance, trees
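A minimal sketch of how mstop interacts with the tuning and cv arguments, using the wpbc data prepared as in the Examples section; the mstop and nfold values are illustrative only:

data("wpbc", package = "TH.data")
wpbc2 <- wpbc[, 1:12]
wpbc2$status <- as.numeric(wpbc2$status) - 1
## tuning = TRUE, cv = FALSE (default): mstop selected by an AIC-type criterion
fit1 <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)],
              tuning = TRUE, cv = FALSE, mstop = 100)
## tuning = TRUE, cv = TRUE: 5-fold cross-validation, where mstop is the
## maximum number of boosting iterations searched over
fit2 <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)],
              tuning = TRUE, cv = TRUE, nfold = 5, mstop = 100)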
Zhu Wang
Buckley-James regression for right-censored survival data with high-dimensional covariates. Methods include L_2 boosting with componentwise linear least squares, componentwise P-splines, and regression trees. Other Buckley-James methods include elastic net, SCAD and MCP. learner="enet" and learner="enet2" use two different implementations of the LASSO. Some of these methods are discussed in Wang and Wang (2010) and the references therein; also see the references below.
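For example, the two LASSO implementations can be requested as follows (a sketch; default tuning parameters are used here, and the data preparation follows the Examples section):

data("wpbc", package = "TH.data")
wpbc2 <- wpbc[, 1:12]
wpbc2$status <- as.numeric(wpbc2$status) - 1
## LASSO via learner = "enet"
fit.enet  <- bujar(y = log(wpbc2$time), cens = wpbc2$status,
                   x = wpbc2[, -(1:2)], learner = "enet")
## alternative LASSO implementation via learner = "enet2"
fit.enet2 <- bujar(y = log(wpbc2$time), cens = wpbc2$status,
                   x = wpbc2[, -(1:2)], learner = "enet2")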
Zhu Wang and C.Y. Wang (2010), Buckley-James Boosting for Survival Analysis with High-Dimensional Biomarker Data. Statistical Applications in Genetics and Molecular Biology, 9(1), Article 24.
Peter Buhlmann and Bin Yu (2003), Boosting with the L2 loss: regression and classification. Journal of the American Statistical Association, 98, 324--339.
Peter Buhlmann (2006), Boosting for high-dimensional linear models. The Annals of Statistics, 34(2), 559--583.
Peter Buhlmann and Torsten Hothorn (2007), Boosting algorithms: regularization, prediction and model fitting. Statistical Science, 22(4), 477--505.
J. Friedman (1991), Multivariate Adaptive Regression Splines (with discussion). Annals of Statistics, 19(1), 1--141.
J.H. Friedman, T. Hastie and R. Tibshirani (2000), Additive Logistic Regression: a Statistical View of Boosting. Annals of Statistics, 28(2), 337--374.
C. Storlie, H. Bondell, B. Reich and H. H. Zhang (2009), Surface Estimation, Variable Selection, and the Nonparametric Oracle Property. Statistica Sinica, to appear.
Sijian Wang, Bin Nan, Ji Zhu and David G. Beer (2008), Doubly Penalized Buckley-James Method for Survival Data with High-Dimensional Covariates. Biometrics, 64, 132--140.
H. Zou and T. Hastie (2005), Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67, 301--320.
data("wpbc", package = "TH.data")
wpbc2 <- wpbc[, 1:12]
wpbc2$status <- as.numeric(wpbc2$status) - 1
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)])
print(fit)
coef(fit)
pr <- predict(fit)
plot(fit)
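## predict at new covariate values via the documented newx argument; the
## training covariates are reused here purely for illustration
pr2 <- predict(fit, newx = wpbc2[, -(1:2)])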
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)], tuning = TRUE)
if (FALSE) {
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)],
             learner = "pspline")
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)],
             learner = "tree", degree = 2)
### select tuning parameters for learner = "enet"
tmp <- gcv.enet(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)])
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)],
             learner = "enet", lamb = tmp$lambda, s = tmp$s)
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)],
             learner = "mars", degree = 2)
summary(fit)
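### twin boosting (a sketch; the mstop and mstop2 values are illustrative)
fit <- bujar(y = log(wpbc2$time), cens = wpbc2$status, x = wpbc2[, -(1:2)],
             twin = TRUE, mstop = 100, mstop2 = 100)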
}