scalreg (version 1.0)

scalreg: Scaled sparse linear regression

Description

The algorithm computes the scaled Lasso solution for a sparse linear regression with a given penalty constant. When the response vector is not supplied, it instead estimates the precision matrix of the predictors.

Usage

scalreg(X, y, lam0 = NULL, LSE = FALSE)

Arguments

X

predictors, an n by p matrix with n > 1 and p > 1.

y

response, an n-vector with n > 1. If NULL, the algorithm estimates the precision matrix of the predictors instead.

lam0

penalty constant; one of c("univ", "quantile") or a user-specified numerical value. If p < 10^6, the default is "quantile"; otherwise, the default is "univ".

LSE

If TRUE, compute the least squares estimates after scaled Lasso selection. Default is FALSE.
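A minimal call might look like the following; the simulated data and the chosen dimensions are purely illustrative, not part of the package:

```r
library(scalreg)

set.seed(1)
n <- 100; p <- 50
X <- matrix(rnorm(n * p), n, p)
beta <- c(2, -1.5, rep(0, p - 2))   # sparse true coefficients
y <- drop(X %*% beta + rnorm(n))

fit <- scalreg(X, y)                # scaled Lasso regression
fit$hsigma                          # estimated noise level

prec <- scalreg(X)                  # y omitted: precision matrix of X
dim(prec$precision)                 # p x p
```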

Value

An object of class "scalreg" is returned. For a linear regression solution, its main components are:

type

"regression".

hsigma

the estimated noise level.

coefficients

the estimated coefficients.

fitted.values

the fitted mean values.

residuals

the residuals, that is, the response minus the fitted values.

lse

the least squares estimate after selection, an object with components analogous to those of "scalreg" (e.g. hsigma, coefficients, fitted.values, residuals).

If a precision matrix is estimated, the main components of the object are:

type

"precision matrix".

precision

the estimated precision matrix.

hsigma

the estimated noise levels, one for the linear regression problem of each column.

lse

the least squares estimate after selection, containing its own precision and hsigma components.

Details

Scaled sparse linear regression jointly estimates the regression coefficients and the noise level in a linear model, as described in detail in Sun and Zhang (2012). The algorithm alternates between estimating the noise level via the mean residual square and scaling the penalty in proportion to the estimated noise level. The theoretical performance of the scaled Lasso with lam0 = "univ" was proven in Sun and Zhang (2012), while the quantile-based penalty level (lam0 = "quantile") was introduced and studied in Sun and Zhang (2013).
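The alternation described above can be sketched as follows. This is a minimal illustration, not the package's implementation: the glmnet package as the inner Lasso solver, the initial noise estimate, and the stopping rule are all assumptions.

```r
# Minimal sketch of the scaled-Lasso alternation (assumes the glmnet
# package; not the scalreg implementation itself).
library(glmnet)

scaled_lasso_sketch <- function(X, y, lam0, tol = 1e-6, maxit = 100) {
  sigma <- sd(y)                          # initial noise-level estimate
  for (it in seq_len(maxit)) {
    # Scale the penalty in proportion to the current noise estimate
    fit <- glmnet(X, y, lambda = sigma * lam0)
    res <- y - predict(fit, newx = X)
    sigma_new <- sqrt(mean(res^2))        # noise level via mean residual square
    if (abs(sigma_new - sigma) < tol) break
    sigma <- sigma_new
  }
  list(coefficients = as.numeric(coef(fit))[-1], hsigma = sigma_new)
}
```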

Precision matrix estimation is described in detail in Sun and Zhang (2013). The algorithm first estimates each column of the matrix by scaled sparse linear regression and then adjusts the matrix estimator to be symmetric.
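The symmetrization step can be sketched as below. Here `Theta` stands for a hypothetical column-wise estimate, and the entry-of-smaller-magnitude rule shown is one common adjustment; the package's internal choice may differ in detail.

```r
# Symmetrize a column-wise precision estimate by keeping, for each pair
# (j, k), the entry of smaller magnitude. `Theta` is a hypothetical
# column-wise estimate, not scalreg output.
symmetrize <- function(Theta) {
  keep <- abs(Theta) <= abs(t(Theta))   # TRUE where |Theta_jk| <= |Theta_kj|
  Theta * keep + t(Theta) * (!keep)
}
```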

References

Sun, T. and Zhang, C.-H. (2012) Scaled sparse linear regression. Biometrika, 99 (4), 879-898.

Sun, T. and Zhang, C.-H. (2013) Sparse matrix inversion with scaled Lasso. Journal of Machine Learning Research, 14, 3385-3418.

Examples

data(sp500)
attach(sp500)
x <- sp500.percent[, 3:ncol(sp500.percent)]
y <- sp500.percent[, 1]

object <- scalreg(x, y)
## print(object)

object <- scalreg(x, y, LSE = TRUE)
print(object$hsigma)
print(object$lse$hsigma)

detach(sp500)