Sparse regularized adaptive Huber regression with "lasso" penalty. The function implements a localized majorize-minimize algorithm with a gradient-based method. The regularization parameter \(\lambda\) is selected by cross-validation, and the robustification parameter \(\tau\) is calibrated by a tuning-free principle.
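As a sketch of the underlying objective (standard in the adaptive Huber regression literature; the notation below is assumed, not taken from this page), the estimator minimizes the \(\ell_1\)-penalized empirical Huber loss

\[
\widehat{\beta} \in \arg\min_{\beta \in \mathbb{R}^{p+1}} \; \frac{1}{n} \sum_{i=1}^{n} \ell_{\tau}\!\left(y_i - x_i^{\top} \beta\right) + \lambda \lVert \beta \rVert_1,
\qquad
\ell_{\tau}(u) =
\begin{cases}
u^2 / 2, & |u| \le \tau, \\
\tau |u| - \tau^2 / 2, & |u| > \tau,
\end{cases}
\]

where \(\tau > 0\) controls the trade-off between the quadratic (least-squares) and linear (absolute-deviation) regimes of the loss, and the intercept is conventionally left unpenalized.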
adaHuber.cv.lasso(
X,
Y,
lambdaSeq = NULL,
kfolds = 5,
numLambda = 50,
phi0 = 0.01,
gamma = 1.2,
epsilon = 0.001,
iteMax = 500
)

X: An \(n\) by \(p\) design matrix. Each row is a vector of observations with \(p\) covariates.

Y: An \(n\)-dimensional response vector.

lambdaSeq: (optional) A sequence of candidate regularization parameters. If unspecified, a reasonable sequence will be generated.

kfolds: (optional) Number of folds for cross-validation. Default is 5.

numLambda: (optional) Number of \(\lambda\) values for cross-validation if lambdaSeq is unspecified. Default is 50.

phi0: (optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01.

gamma: (optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2.

epsilon: (optional) A tolerance level for the stopping rule. The iteration stops when the maximum magnitude of the change in coefficient updates is less than epsilon. Default is 0.001.

iteMax: (optional) Maximum number of iterations. Default is 500.
An object containing the following items will be returned:
coef: A \((p + 1)\)-vector of estimated sparse regression coefficients, including the intercept.

lambdaSeq: The sequence of candidate regularization parameters.

lambda: The regularization parameter selected by cross-validation.

tau: The robustification parameter calibrated by the tuning-free principle.

iteration: Number of iterations until convergence.

phi: The quadratic coefficient parameter in the local adaptive majorize-minimize algorithm.
Pan, X., Sun, Q. and Zhou, W.-X. (2021). Iteratively reweighted l1-penalized robust regression. Electron. J. Stat., 15, 3287-3348.
Sun, Q., Zhou, W.-X. and Fan, J. (2020). Adaptive Huber regression. J. Amer. Statist. Assoc., 115, 254-265.
Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2021). A new principle for tuning-free Huber regression. Stat. Sinica, 31, 2153-2177.
See adaHuber.lasso for regularized adaptive Huber regression with a specified \(\lambda\).
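When a particular regularization level is already known, the cross-validation step can be skipped by calling the companion function directly. The following is a hypothetical sketch: the argument name lambda and the coef component are assumed by analogy with this page, not confirmed by it.

```r
# Sketch: fit at a single, user-specified regularization level
# (lambda argument name assumed; value 0.05 is illustrative only)
fit.fixed = adaHuber.lasso(X, Y, lambda = 0.05)
beta.fixed = fit.fixed$coef  # estimated coefficients, intercept first
```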
# NOT RUN {
n = 100; p = 200; s = 5                   # sample size, dimension, sparsity level
beta = c(rep(1.5, s + 1), rep(0, p - s))  # intercept plus s nonzero coefficients
X = matrix(rnorm(n * p), n, p)            # Gaussian design matrix
err = rt(n, 2)                            # heavy-tailed t(2) errors
Y = cbind(rep(1, n), X) %*% beta + err
fit.lasso = adaHuber.cv.lasso(X, Y)
beta.lasso = fit.lasso$coef               # estimated coefficients, intercept first
# }