Fit a PLS Cox model in which the PLS components are computed once and the Cox regression in the latent space is optimised by gradient-based methods.
This function is intended as a faster, approximate alternative to
big_pls_cox_fast() when many fits are required or when the design
is stored as a bigmemory::big.matrix.
big_pls_cox_gd(
X,
time,
status,
ncomp = 2L,
max_iter = 2000L,
tol = 1e-08,
learning_rate = 0.05,
keepX = NULL,
coxfit = TRUE,
method = c("gd", "bb", "nesterov", "bfgs"),
diag = TRUE
)

An object of class "big_pls_cox_gd" that contains:
coefficients: Estimated Cox regression coefficients on the latent scores.
loglik: Final partial log-likelihood value.
iterations: Number of gradient-descent iterations performed.
converged: Logical flag indicating whether convergence was achieved.
scores: Matrix of latent score vectors (one column per component).
loadings: Matrix of loading vectors associated with each component.
weights: Matrix of PLS weight vectors.
center: Column means used to centre the predictors.
scale: Column scales (standard deviations) used to standardise the predictors.
keepX: Vector describing the number of predictors retained per component.
time: Numeric vector of follow-up times.
status: Numeric or integer vector containing the event indicators.
loglik_trace: Trace of the partial log-likelihood across iterations (stored when diag = TRUE).
step_trace: Trace of the step sizes across iterations (stored when diag = TRUE).
gradnorm_trace: Trace of the gradient norms across iterations (stored when diag = TRUE).
cox_fit: Final Cox model fitted on the components.
X: A bigmemory::big.matrix containing the design matrix (rows are observations).
time: A numeric vector of follow-up times with length equal to the number of rows of X.
status: A numeric or integer vector of the same length as time containing the event indicators (1 for an event, 0 for censoring).
ncomp: An integer giving the number of latent components to extract from X. Defaults to 2.
max_iter: Maximum number of gradient iterations.
tol: Convergence tolerance for the optimisation in the Cox space. Both the change in log-likelihood and the Euclidean change in the coefficient vector are monitored.
learning_rate: Initial learning rate for first-order methods. It is used by "gd" and as a starting scale for "bb" and "nesterov"; it is ignored by "bfgs".
keepX: Optional integer vector giving the number of predictors to retain per component (naive sparsity). A value of zero keeps all predictors.
coxfit: Logical. If TRUE, fit a Cox model on the extracted components.
method: Optimisation scheme used in the latent space. One of:
"gd": Simple fixed-step gradient descent. This is the most transparent method and is useful for debugging and didactic purposes.
"bb": Barzilai-Borwein step size. Uses a diagonal quasi-second-order update of the step size based on the last two gradients. Often converges faster than "gd" at similar computational cost.
"nesterov": Nesterov-type accelerated gradient with a fixed momentum schedule. Can yield smoother convergence in early iterations, at the price of slightly more bookkeeping.
"bfgs": Quasi-Newton update in the latent space, with a limited-memory BFGS-type approximation of the Hessian. This gives the most accurate coefficients in a small number of iterations but requires more linear algebra per step.
The default is "bb", which provides a good balance between speed and robustness in most examples.
diag: Logical. If TRUE, store iteration-level diagnostics (loglik_trace, step_trace, gradnorm_trace).
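Because the returned object exposes loglik and iterations, the optimisation schemes can be compared directly on the same data. A minimal sketch, assuming the package providing big_pls_cox_gd() is attached; the simulated data mirror the examples below:

```r
library(bigmemory)
set.seed(1)
# Simulated survival data stored as a big.matrix
X <- bigmemory::as.big.matrix(matrix(rnorm(100 * 10), 100, 10))
time <- rexp(100, rate = 0.1)
status <- rbinom(100, 1, 0.7)

# Fit the same model with three optimisation schemes
fits <- lapply(c("gd", "bb", "bfgs"), function(m) {
  big_pls_cox_gd(X, time, status, ncomp = 2, method = m)
})

# Final partial log-likelihood and iteration count per method
sapply(fits, function(f) c(loglik = f$loglik, iterations = f$iterations))
```

In practice "bb" and "bfgs" typically reach a comparable log-likelihood in far fewer iterations than plain "gd".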
The function first computes PLS components using the same procedure
as big_pls_cox_fast(), then holds these components fixed and
optimises the Cox partial log-likelihood in the reduced space.
The coefficients stored in fit$coefficients are the result of the
chosen optimisation method and are approximate. The field
fit$cox_fit contains the Cox model refitted by
survival::coxph() on the final PLS scores. Prediction functions
use the coefficients from cox_fit so that linear predictors are
directly interpretable as Cox risk scores.
The optimisation tolerances control the compromise between speed
and accuracy. If you need very close agreement with the exact PLS
Cox solution, use big_pls_cox_fast(). If you only need accurate
risk rankings and want to fit many models, the gradient-based
method with "bb" or "bfgs" is usually sufficient.
Maumy, M., Bertrand, F. (2023). PLS models and their extension for big data. Joint Statistical Meetings (JSM 2023), Toronto, ON, Canada.
Maumy, M., Bertrand, F. (2023). bigPLS: Fitting and cross-validating PLS-based Cox models to censored big data. BioC2023 — The Bioconductor Annual Conference, Dana-Farber Cancer Institute, Boston, MA, USA. Poster. https://doi.org/10.7490/f1000research.1119546.1
Bastien, P., Bertrand, F., Meyer, N., & Maumy-Bertrand, M. (2015). Deviance residuals-based sparse PLS and sparse kernel PLS for censored data. Bioinformatics, 31(3), 397–404. doi:10.1093/bioinformatics/btu660
Bertrand, F., Bastien, P., Meyer, N., & Maumy-Bertrand, M. (2014). PLS models for censored data. In Proceedings of UseR! 2014 (p. 152).
big_pls_cox(), predict.big_pls_cox(), select_ncomp(),
computeDR().
library(bigmemory)
set.seed(1)
n <- 50
p <- 10
X <- bigmemory::as.big.matrix(matrix(rnorm(n * p), n, p))
time <- rexp(n, rate = 0.1)
status <- rbinom(n, 1, 0.7)
fit <- big_pls_cox_gd(X, time, status, ncomp = 3, max_iter = 200)
str(fit)
head(fit$scores)