flare-package: flare: Family of Lasso Regression
Description
The package "flare" implements a family of Lasso variants, including the Dantzig selector, LAD Lasso, SQRT Lasso, and Lq Lasso, for estimating high dimensional sparse linear models. For the Dantzig selector and Lq Lasso, we adopt the alternating direction method of multipliers (ADMM) and convert the original optimization problem into a sequence of L1 penalized least squares minimization problems, which can be solved efficiently by combining linearization with the coordinate descent algorithm. For the LAD Lasso and SQRT Lasso, we combine dual smoothing with the monotone fast iterative shrinkage-thresholding algorithm (MFISTA). The computation is memory-optimized using sparse matrix output. Besides sparse linear model estimation, we also extend these Lasso variants to sparse Gaussian graphical model estimation, including TIGER and CLIME (ADMM), using either the L1 or the adaptive L1 penalty.
Details
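To make the solver strategy concrete, below is a minimal, self-contained sketch of the soft-thresholding update that sits at the core of the L1 penalized least squares subproblems described above. This is not flare's actual implementation: plain ISTA is shown in place of the package's MFISTA and linearization/coordinate-descent solvers, and all function names here are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=1000):
    # Solve  min_beta  0.5 * ||y - X beta||^2 + lam * ||beta||_1
    # by iterative soft-thresholding (ISTA), with step size 1/L where
    # L = ||X||_2^2 is the Lipschitz constant of the gradient.
    L = np.linalg.norm(X, 2) ** 2
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)          # gradient of the smooth part
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

MFISTA, used by flare for the LAD and SQRT Lasso, augments this basic iteration with a Nesterov-style momentum step and a monotonicity safeguard that guarantees the objective never increases.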
Package: flare
Type: Package
Version: 0.9.9
Date: 2013-03-31
License: GPL-2

References
1. A. Belloni, V. Chernozhukov and L. Wang. Pivotal recovery of sparse signals via conic programming. Biometrika, 2012.
2. L. Wang. L1 penalized LAD estimator for high dimensional linear regression. Journal of Multivariate Analysis, 2013.
3. E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2007.
4. T. Cai, W. Liu and X. Luo. A constrained L1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 2011.
5. H. Liu and L. Wang. TIGER: A tuning-insensitive approach for optimally estimating large undirected graphs. Technical Report, 2012.
6. A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 2009.
7. B. He and X. Yuan. On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers. Technical Report, 2012.
8. J. Liu and J. Ye. Efficient L1/Lq Norm Regularization. Technical Report, 2010.