flare (version 0.9.6)

flare.slim: Sparse Linear Regression using Non-smooth Loss Functions and L1 Regularization

Description

The function "flare.slim" implements a family of Lasso variants for estimating high dimensional sparse linear models, including the Dantzig selector, LAD Lasso, SQRT Lasso, and Lq Lasso. We adopt the alternating direction method of multipliers (ADMM) and convert the original optimization problem into a sequence of L1-penalized least squares problems, which can be solved efficiently by combining linearization with the coordinate descent algorithm. The computation is memory-optimized through sparse matrix output.

Usage

flare.slim(X, Y,  lambda = NULL, nlambda = NULL,
       lambda.min.ratio = NULL, rho = NULL, method="lq", 
       q = 2, prec = 1e-3, max.ite = 1e3, verbose = TRUE)

Arguments

Y
The $n$ dimensional response vector.
X
The $n$ by $d$ design matrix.
lambda
A sequence of decreasing positive numbers controlling the regularization. Typical usage is to leave the input lambda = NULL and have the program compute its own lambda sequence based on nlambda and lambda.min.ratio.
nlambda
The number of values used in lambda. Default value is 5.
lambda.min.ratio
The smallest value of lambda, as a fraction of the upper bound (MAX) of the regularization parameter. The program automatically generates lambda as a sequence of length nlambda, decreasing from MAX to lambda.min.ratio * MAX in log scale.
rho
The penalty parameter used in ADMM. The default value is $\sqrt{d}$.
method
Dantzig selector is applied if method = "dantzig" and $L_q$ Lasso is applied if method = "lq". The default value is "lq".
q
The loss function used in Lq Lasso. It is only applicable when method = "lq" and must be in $[1,2]$. The default value is 2.
prec
Stopping criterion. The default value is 1e-3.
max.ite
The iteration limit. The default value is 1e3.
verbose
Tracing information printing is disabled if verbose = FALSE. The default value is TRUE.
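When lambda = NULL, a common convention for Lasso-type solvers is to space nlambda values logarithmically between an upper bound and lambda.min.ratio times that bound. The exact internal rule used by flare.slim is not shown here; the sketch below uses a hypothetical helper (make_lambda_seq, not part of flare) and a commonly used upper bound, max |X'Y| / n, purely for illustration:

```r
## Hypothetical helper (not part of flare): generate a decreasing,
## log-spaced lambda sequence of length nlambda, running from an
## upper bound MAX down to lambda.min.ratio * MAX.
make_lambda_seq <- function(X, Y, nlambda = 5, lambda.min.ratio = 0.25) {
  n <- nrow(X)
  ## A common upper bound for Lasso-type problems: max |X'Y| / n.
  ## flare.slim may use a different bound internally.
  lambda_max <- max(abs(crossprod(X, Y))) / n
  exp(seq(log(lambda_max), log(lambda.min.ratio * lambda_max),
          length.out = nlambda))
}
```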

Value

  • An object with S3 class "flare.slim" is returned, with the following components:
  • beta: A matrix of regression estimates whose columns correspond to regularization parameters.
  • intercept: The values of the intercepts corresponding to regularization parameters.
  • Y: The value of Y used in the program.
  • X: The value of X used in the program.
  • lambda: The sequence of regularization parameters used in the program.
  • nlambda: The number of values in lambda.
  • method: The method from the input.
  • sparsity: The sparsity levels of the solution path.
  • ite: A list of two vectors, where ite[[1]] is the number of external iterations and ite[[2]] is the number of internal iterations, with the i-th entry corresponding to the i-th regularization parameter.
  • verbose: The verbose flag from the input.

Details

Dantzig selector solves the following optimization problem $$\min || \beta ||_1, \quad \textrm{s.t. } || X'(Y - X \beta) ||_{\infty} \le \lambda.$$ $L_q$ loss Lasso solves the following optimization problem $$\min n^{-\frac{1}{q}} || Y - X \beta ||_q + \lambda || \beta ||_1,$$ where $1 \le q \le 2$. In particular, $q = 1$ corresponds to LAD Lasso and $q = 2$ corresponds to SQRT Lasso.
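To make the $L_q$ objective above concrete, the sketch below evaluates it for a candidate coefficient vector. The helper lq_objective is illustrative only (it is not a flare function) and simply transcribes the formula from this section:

```r
## Illustrative helper (not part of flare): evaluate the Lq Lasso
## objective  n^(-1/q) * ||Y - X beta||_q + lambda * ||beta||_1.
lq_objective <- function(X, Y, beta, lambda, q = 2) {
  n <- nrow(X)
  resid <- Y - X %*% beta
  loss <- n^(-1 / q) * sum(abs(resid)^q)^(1 / q)
  loss + lambda * sum(abs(beta))
}
```

For example, with n = 2, a residual of (1, -1), q = 2, and beta = 0, the loss term is 2^(-1/2) * sqrt(2) = 1 and the penalty term vanishes.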

References

1. E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2007.
2. A. Belloni, V. Chernozhukov and L. Wang. Pivotal recovery of sparse signals via conic programming. Biometrika, 2012.
3. L. Wang. L1 penalized LAD estimator for high dimensional linear regression. Technical Report, 2012.
4. J. Liu and J. Ye. Efficient L1/Lq Norm Regularization. Technical Report, 2010.
5. B. He and X. Yuan. On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers. Technical Report, 2012.

See Also

flare-package.

Examples

## generate data
library(flare)  # for flare.slim
library(MASS)   # for mvrnorm
set.seed(1)
n = 100
d = 200
d1 = 10
rho0 = 0.3
lambda = c(3:1) * sqrt(log(d)/n)
Sigma = matrix(0, nrow = d, ncol = d)
Sigma[1:d1, 1:d1] = rho0
diag(Sigma) = 1
mu = rep(0, d)
X = mvrnorm(n = n, mu = mu, Sigma = Sigma)
eps = rt(n = n, df = n - 1)
beta = c(rep(sqrt(1/3), 3), rep(0, d - 3))
Y = X %*% beta + eps

## Regression with "dantzig" and general "lq" respectively
out1=flare.slim(X=X,Y=Y,lambda=lambda,method = "dantzig")
out2=flare.slim(X=X,Y=Y,lambda=lambda,method = "lq",q=1.5)

## Print results
print(out1)
print(out2)
plot(out1)