Linearly approximate part of the objective function to greatly speed up computation.
lla(b.o,
lmb.rho,
bm_gm,
nu,
nstep.lla = 100L,
eps.lla = 1E-6)
Vector of the estimated sparse solution.
Convergence check (0 if converged).
Number of iterations done.
b.o: Vector of the sparse solution.
lmb.rho: Lambda-rho ratio.
bm_gm: Vector of the pseudo-solution.
nu: Shape parameter of the penalty.
nstep.lla: Maximum number of iterations of the LLA algorithm (if used).
eps.lla: Convergence threshold of the LLA algorithm (if used).
The LLA approximation allows the computationally intensive part of the problem to be treated as a weighted LASSO (Tibshirani, 1996) problem. This reduces the computational effort substantially while maintaining satisfactory accuracy of the results. See Zou and Li (2008).
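The idea can be sketched in a few lines of base R. This is a minimal illustration, not the implementation behind lla(): it assumes an orthonormal design, so the weighted LASSO step reduces to coordinate-wise soft-thresholding, and it uses a placeholder concave penalty derivative lambda / (nu + |b|), since the actual penalty of lla() is not specified here. The names soft_threshold, lla_step and pen_deriv are illustrative.

```r
# Soft-thresholding operator: the closed-form LASSO solution for one
# coordinate under an orthonormal design.
soft_threshold <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

# One LLA step: the concave penalty is replaced by its tangent line at the
# current iterate, so each coordinate solves a weighted LASSO problem whose
# weight is the penalty derivative evaluated at the current estimate.
lla_step <- function(b_current, z, pen_deriv) {
  w <- pen_deriv(abs(b_current))  # per-coordinate penalty weights
  soft_threshold(z, w)
}

# Placeholder concave penalty derivative (hypothetical choice).
lambda <- 1
nu <- 1
pen_deriv <- function(b) lambda / (nu + b)

z  <- c(3, 0.4, -2, 0.1)  # pseudo-solution (e.g. OLS estimates)
b1 <- lla_step(z, z, pen_deriv)  # one LLA step started at the pseudo-solution
# b1 is c(2.75, 0, -5/3, 0): small coefficients are thresholded to exact
# zero, large ones are shrunk only mildly, as the concave penalty intends.
```

Because the weight w shrinks as |b| grows, large coefficients receive little penalization while small ones are driven to zero, which is the behaviour that distinguishes the concave penalty from the plain LASSO.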
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288.
Zou, H. and Li, R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. Annals of Statistics, 36(4):1509–1533.