Fits regularization paths for group-lasso penalized learning problems at a sequence of regularization parameters lambda.
Usage

gglasso(
  x,
  y,
  group = NULL,
  loss = c("ls", "logit", "sqsvm", "hsvm", "wls"),
  nlambda = 100,
  lambda.factor = ifelse(nobs < nvars, 0.05, 0.001),
  lambda = NULL,
  pf = sqrt(bs),
  weight = NULL,
  dfmax = as.integer(max(group)) + 1,
  pmax = min(dfmax * 1.2, as.integer(max(group))),
  eps = 1e-08,
  maxit = 3e+08,
  delta,
  intercept = TRUE
)
Value

An object with S3 class gglasso, a list with the following components:

call: the call that produced this object.

b0: intercept sequence of length length(lambda).

beta: a p * length(lambda) matrix of coefficients.

df: the number of nonzero groups for each value of lambda.

dim: dimensions of the coefficient matrix.

lambda: the actual sequence of lambda values used.

npasses: total number of iterations (of the innermost loop) summed over all lambda values.

jerr: error flag for warnings and errors; 0 if no error.

group: a vector of consecutive integers describing the grouping of the coefficients.
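As an illustrative sketch (not part of the package documentation), these components can be inspected directly on a fitted object; the snippet below assumes the bardet data and the default "ls" loss used in the Examples section.

# sketch: fit a path and inspect the returned components
library(gglasso)
data(bardet)
m1 <- gglasso(x = bardet$x, y = bardet$y, group = rep(1:20, each = 5))

m1$lambda     # the actual lambda sequence used
m1$df         # number of nonzero groups at each lambda
dim(m1$beta)  # p x length(lambda) coefficient matrix
length(m1$b0) # one intercept per lambda value
m1$jerr       # 0 if no error occurred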
Arguments

x: matrix of predictors, of dimension \(n \times p\); each row is an observation vector.

y: response variable. This argument should be quantitative for regression (least squares), and a two-level factor for classification (logistic model, huberized SVM, squared SVM).

group: a vector of consecutive integers describing the grouping of the coefficients (see example below).

loss: a character string specifying the loss function to use. Valid options are: "ls" least squares loss (regression), "logit" logistic loss (classification), "hsvm" Huberized squared hinge loss (classification), "sqsvm" squared hinge loss (classification). Default is "ls".

nlambda: the number of lambda values. Default is 100.

lambda.factor: the factor for getting the minimal lambda in the lambda sequence, where min(lambda) = lambda.factor * max(lambda). max(lambda) is the smallest value of lambda for which all coefficients are zero. The default depends on the relationship between \(n\) (the number of rows in the matrix of predictors) and \(p\) (the number of predictors). If \(n \ge p\), the default is 0.001, close to zero. If \(n < p\), the default is 0.05. A very small value of lambda.factor will lead to a saturated fit. It has no effect if a user-defined lambda sequence is supplied. (A sketch of such a sequence follows this argument list.)

lambda: a user-supplied lambda sequence. Typically, by leaving this option unspecified, users let the program compute its own lambda sequence based on nlambda and lambda.factor. Supplying a value of lambda overrides this. It is better to supply a decreasing sequence of lambda values than a single (small) value; if the supplied sequence is not decreasing, the program will sort it in decreasing order automatically.

pf: penalty factor, a vector of length bn (the total number of groups). Separate penalty weights can be applied to each group of \(\beta\)s to allow differential shrinkage. Can be 0 for some groups, which implies no shrinkage and results in that group always being included in the model. The default value for each entry is the square root of the size of the corresponding group (see the additional sketches after the Examples section).

weight: an \(n \times n\) observation weight matrix, where \(n\) is the number of observations. Only used if loss='wls' is specified. Note that cross-validation is NOT IMPLEMENTED for loss='wls'. (A sketch appears at the end of the Examples section.)

dfmax: limit the maximum number of groups in the model. Useful for a very large number of groups, if a partial path is desired. Default is the total number of groups plus one, as.integer(max(group)) + 1.

pmax: limit the maximum number of groups ever to be nonzero. For example, once a group enters the model, no matter how many times it exits or re-enters the model along the path, it is counted only once. Default is min(dfmax * 1.2, as.integer(max(group))).

eps: convergence termination tolerance. Default value is 1e-8.

maxit: maximum number of outer-loop iterations allowed at a fixed lambda value. Default is 3e8. If models do not converge, consider increasing maxit.

delta: the parameter \(\delta\) in the "hsvm" (Huberized squared hinge loss). Default is 1.

intercept: whether to include an intercept in the model. Default is TRUE.
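To make the lambda.factor relationship above concrete, here is a minimal sketch of a log-spaced lambda sequence with the documented endpoints. Note that max_lambda below is a made-up placeholder; the true largest lambda is computed internally by gglasso.

# illustrative sketch only: a decreasing, log-spaced lambda sequence with
# min(lambda) = lambda.factor * max(lambda)
max_lambda <- 0.5        # hypothetical placeholder, NOT the package's value
lambda.factor <- 0.001   # default when n >= p
nlambda <- 100
lambda_seq <- exp(seq(log(max_lambda),
                      log(lambda.factor * max_lambda),
                      length.out = nlambda))

# a decreasing user-supplied sequence like this can then be passed via the
# `lambda` argument, which overrides nlambda and lambda.factor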
Author(s)

Yi Yang and Hui Zou
Maintainer: Yi Yang <yi.yang6@mcgill.ca>
Details

Note that the objective function for "ls" least squares is
$$RSS/(2n) + \lambda \cdot penalty;$$
for "hsvm" Huberized squared hinge loss, "sqsvm" squared hinge loss and "logit" logistic regression, the objective function is
$$-loglik/n + \lambda \cdot penalty.$$
Users can also tweak the penalty by choosing a different penalty factor pf.

For reasons of computing speed, if models are not converging or are running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.factor before increasing maxit.
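As a worked illustration of the "ls" objective above (a sketch under the assumption that the penalty is the group-wise L2 norm weighted by the default pf = sqrt(group size); it is not package code), one point on a fitted path can be evaluated as follows:

# sketch: evaluate RSS/(2n) + lambda * penalty at the k-th lambda
library(gglasso)
data(bardet)
grp <- rep(1:20, each = 5)
fit <- gglasso(x = bardet$x, y = bardet$y, group = grp, loss = "ls")

k <- 50
n <- nrow(bardet$x)
beta_k <- fit$beta[, k]
yhat <- fit$b0[k] + bardet$x %*% beta_k
rss_term <- sum((bardet$y - yhat)^2) / (2 * n)

# penalty: sum over groups of pf_g * ||beta_g||_2, with pf_g = sqrt(group size)
pen <- sum(tapply(seq_along(grp), grp, function(idx) {
  sqrt(length(idx)) * sqrt(sum(beta_k[idx]^2))
}))
objective <- rss_term + fit$lambda[k] * pen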
References

Yang, Y. and Zou, H. (2015), "A Fast Unified Algorithm for Computing Group-Lasso Penalized Learning Problems," Statistics and Computing, 25(6), 1129-1141.
BugReport: https://github.com/emeryyi/gglasso
See Also

plot.gglasso
Examples

# load gglasso library
library(gglasso)

# load bardet data set
data(bardet)

# define group index
group1 <- rep(1:20, each = 5)

# fit group lasso penalized least squares
m1 <- gglasso(x = bardet$x, y = bardet$y, group = group1, loss = "ls")

# load colon data set
data(colon)

# define group index
group2 <- rep(1:20, each = 5)

# fit group lasso penalized logistic regression
m2 <- gglasso(x = colon$x, y = colon$y, group = group2, loss = "logit")
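The following additional snippets are illustrative sketches beyond the original examples; they reuse the objects defined above and show the pf argument leaving one group unpenalized, the delta parameter of the "hsvm" loss, and coefficient extraction at one point on the path.

# sketch: leave group 1 unpenalized via a zero penalty factor; the other
# groups keep the default sqrt(group size) weight
pf1 <- sqrt(rep(5, 20))
pf1[1] <- 0
m3 <- gglasso(x = bardet$x, y = bardet$y, group = group1, loss = "ls", pf = pf1)

# sketch: Huberized squared hinge loss with an explicit delta (default is 1)
m4 <- gglasso(x = colon$x, y = colon$y, group = group2, loss = "hsvm", delta = 2)

# sketch: coefficients at a chosen lambda on the fitted path
coef(m1, s = m1$lambda[10])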
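Finally, a hedged sketch of the 'wls' loss with its \(n \times n\) observation weight matrix; the identity matrix below is purely a placeholder assumption (it reduces to ordinary least squares), and cross-validation is not implemented for this loss.

# sketch: weighted least squares with a placeholder identity weight matrix
w <- diag(nrow(bardet$x))
m5 <- gglasso(x = bardet$x, y = bardet$y, group = group1, loss = "wls", weight = w)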