Sparse-group SLOPE (SGS) main fitting function. Supports both linear and logistic regression, with both dense and sparse matrix implementations.
fit_sgs(
X,
y,
groups,
type = "linear",
lambda = "path",
path_length = 20,
min_frac = 0.05,
alpha = 0.95,
vFDR = 0.1,
gFDR = 0.1,
pen_method = 1,
max_iter = 5000,
backtracking = 0.7,
max_iter_backtracking = 100,
tol = 1e-05,
standardise = "l2",
intercept = TRUE,
screen = TRUE,
verbose = FALSE,
w_weights = NULL,
v_weights = NULL
)
A list containing:
The fitted values from the regression. Taken to be the more stable fit between x and z, which is usually the former. A filter is applied to remove very small values, where ATOS has not been able to shrink exactly to zero. Check this against x and z.
The group values from the regression. Taken by applying the \(\ell_2\) norm within each group on beta.
A list containing the indices of the active/selected variables for each "lambda" value. Index 1 corresponds to the first column in X.
A list containing the indices of the active/selected groups for each "lambda" value. Index 1 corresponds to the first group in the groups vector. You can see the group order by running unique(groups).
Number of iterations performed. If convergence is not reached, this will be max_iter.
Logical flag indicating whether ATOS converged, according to tol.
Final value of the convergence criterion.
The solution to the original problem (see Pedregosa and Gidel (2018)).
The updated values from applying the first proximal operator (see Pedregosa and Gidel (2018)).
The solution to the dual problem (see Pedregosa and Gidel (2018)).
List of variables that were kept after the screening step for each "lambda" value (corresponds to \(\mathcal{S}_v\) in Feser and Evangelou (2024)).
List of groups that were kept after the screening step for each "lambda" value (corresponds to \(\mathcal{S}_g\) in Feser and Evangelou (2024)).
List of variables that were used for fitting after screening for each "lambda" value (corresponds to \(\mathcal{E}_v\) in Feser and Evangelou (2024)).
List of groups that were used for fitting after screening for each "lambda" value (corresponds to \(\mathcal{E}_g\) in Feser and Evangelou (2024)).
List of variables that violated the KKT conditions for each "lambda" value (corresponds to \(\mathcal{K}_v\) in Feser and Evangelou (2024)).
List of groups that violated the KKT conditions for each "lambda" value (corresponds to \(\mathcal{K}_g\) in Feser and Evangelou (2024)).
Vector of the variable penalty sequence.
Vector of the group penalty sequence.
Logical flag indicating whether screening was performed.
Indicates which type of regression was performed.
Logical flag indicating whether an intercept was fit.
Value(s) of \(\lambda\) used to fit the model.
Input matrix of dimensions \(n \times p\). Can be a sparse matrix (using class "sparseMatrix" from the Matrix package).
Output vector of dimension \(n\). For type="linear" it should be continuous, and for type="logistic" it should be a binary variable.
A grouping structure for the input data. Should take the form of a vector of group indices.
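For example, ten variables split into four contiguous groups can be encoded as:

```r
# group indices for 10 variables: variables 1-3 in group 1, 4-5 in group 2,
# 6-8 in group 3, and 9-10 in group 4
groups <- c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4)

# the same vector built programmatically
groups_alt <- rep(1:4, times = c(3, 2, 3, 2))

# the group order as seen by fit_sgs
unique(groups)  # 1 2 3 4
```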
The type of regression to perform. Supported values are: "linear" and "logistic".
The regularisation parameter. Defines the level of sparsity in the model. A higher value leads to sparser models:
"path" computes a path of regularisation parameters of length "path_length". The path will begin just above the value at which the first predictor enters the model and will terminate at the value determined by "min_frac".
User-specified single value or sequence. Internal scaling is applied based on the type of standardisation. The returned "lambda" value will be the original unscaled value(s).
The number of \(\lambda\) values to fit the model for. If "lambda" is user-specified, this is ignored.
Smallest value of \(\lambda\) as a fraction of the maximum value. That is, the final \(\lambda\) will be "min_frac" of the first \(\lambda\) value.
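fit_sgs() builds the path internally; a path of this shape can be sketched as a log-linear sequence. Here lambda_max is an arbitrary placeholder for the entry value of the first predictor, and the exact internal spacing may differ:

```r
# sketch: a log-linearly spaced path from lambda_max down to
# min_frac * lambda_max, of length path_length
path_length <- 20
min_frac <- 0.05
lambda_max <- 1  # placeholder; in practice the value where the first predictor enters

lambdas <- exp(seq(log(lambda_max), log(min_frac * lambda_max),
                   length.out = path_length))
```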
The value of \(\alpha\), which defines the convex balance between SLOPE and gSLOPE. Must be between 0 and 1. Recommended value is 0.95.
Defines the desired variable false discovery rate (FDR) level, which determines the shape of the variable penalties. Must be between 0 and 1.
Defines the desired group false discovery rate (FDR) level, which determines the shape of the group penalties. Must be between 0 and 1.
The type of penalty sequences to use (see Feser and Evangelou (2023)):
"1" uses the vMean SGS and gMean gSLOPE sequences.
"2" uses the vMax SGS and gMean gSLOPE sequences.
"3" uses the BH SLOPE and gMean gSLOPE sequences, also known as SGS Original.
Maximum number of ATOS iterations to perform.
The backtracking parameter, \(\tau\), as defined in Pedregosa and Gidel (2018).
Maximum number of backtracking line search iterations to perform per global iteration.
Convergence tolerance for the stopping criteria.
Type of standardisation to perform on X:
"l2" standardises the input data to have \(\ell_2\) norms of one. When used, "lambda" is scaled internally by \(1/\sqrt{n}\).
"l1" standardises the input data to have \(\ell_1\) norms of one. When used, "lambda" is scaled internally by \(1/n\).
"sd" standardises the input data to have standard deviation of one.
"none" applies no standardisation.
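As an illustration of the "l2" option, the following sketch scales each column to unit \(\ell_2\) norm (assuming columns are centred first, as when an intercept is fitted; the package's internal routine may differ in detail):

```r
# sketch of "l2" standardisation: centre columns, then scale to unit l2 norm
set.seed(1)
X <- matrix(rnorm(20), nrow = 5)

X_c <- scale(X, center = TRUE, scale = FALSE)    # centre each column
X_s <- sweep(X_c, 2, sqrt(colSums(X_c^2)), "/")  # unit l2 norm per column

colSums(X_s^2)  # all columns now have squared l2 norm 1
```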
Logical flag for whether to fit an intercept.
Logical flag for whether to apply screening rules (see Feser and Evangelou (2024)). Screening discards irrelevant groups before fitting, greatly improving speed.
Logical flag for whether to print fitting information.
Optional vector for the group penalty weights. Overrides the penalties from pen_method if specified. When entering custom weights, these are multiplied internally by \(\lambda\) and \(1-\alpha\). To avoid this behaviour, set \(\lambda = 2\) and \(\alpha = 0.5\).
Optional vector for the variable penalty weights. Overrides the penalties from pen_method if specified. When entering custom weights, these are multiplied internally by \(\lambda\) and \(\alpha\). To avoid this behaviour, set \(\lambda = 2\) and \(\alpha = 0.5\).
fit_sgs() fits an SGS model (Feser and Evangelou (2023)) using adaptive three operator splitting (ATOS). SGS is a sparse-group method, so it selects both variables and groups. Unlike group selection approaches, not every variable within a group is set as active.
It solves the convex optimisation problem given by
$$
\frac{1}{2n} f(b ; y, \mathbf{X}) + \lambda \alpha \sum_{i=1}^{p}v_i |b|_{(i)} + \lambda (1-\alpha)\sum_{g=1}^{m}w_g \sqrt{p_g} \|b^{(g)}\|_2,
$$
where \(f(\cdot)\) is the loss function and \(p_g\) are the group sizes. The penalty parameters in SGS are sorted so that the largest coefficients are matched with the largest penalties, to reduce the FDR.
For the variables: \(|\beta|_{(1)}\geq \ldots \geq |\beta|_{(p)}\) and \(v_1 \geq \ldots \geq v_p \geq 0\).
For the groups: \(\sqrt{p_1}\|\beta^{(1)}\|_2 \geq \ldots\geq \sqrt{p_m}\|\beta^{(m)}\|_2\) and \(w_1\geq \ldots \geq w_m \geq 0\).
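The sorted-penalty matching for the variable part can be illustrated with a small helper (slope_penalty is an illustrative name, not a package function): the largest \(|b|\) is paired with the largest weight \(v\).

```r
# the sorted-l1 (SLOPE) part of the penalty: largest |b| paired with largest v
slope_penalty <- function(b, v) {
  stopifnot(length(b) == length(v))
  sum(sort(v, decreasing = TRUE) * sort(abs(b), decreasing = TRUE))
}

b <- c(0.2, -1.5, 0, 0.7)
v <- c(4, 3, 2, 1) / 4   # non-increasing penalty weights
slope_penalty(b, v)      # 1*1.5 + 0.75*0.7 + 0.5*0.2 + 0.25*0 = 2.125
```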
In the case of the linear model, the loss function is given by the mean-squared error loss:
$$
f(b; y, \mathbf{X}) = \left\|y-\mathbf{X}b \right\|_2^2.
$$
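This loss can be written directly (linear_loss is an illustrative helper, not part of the package):

```r
# the linear loss as written above: f(b; y, X) = ||y - X b||_2^2
linear_loss <- function(b, y, X) sum((y - X %*% b)^2)

# tiny check: X = I, y = (1, 2), b = 0  =>  loss = 1^2 + 2^2 = 5
linear_loss(c(0, 0), c(1, 2), diag(2))
```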
In the logistic model, the loss function is given by
$$
f(b;y,\mathbf{X})=-\frac{1}{n} \log(\mathcal{L}(b; y, \mathbf{X})),
$$
where the log-likelihood is given by
$$
\mathcal{L}(b; y, \mathbf{X}) = \sum_{i=1}^{n}\left\{y_i b^\intercal x_i - \log(1+\exp(b^\intercal x_i)) \right\}.
$$
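A direct transcription of these two formulas (log_lik and logistic_loss are illustrative names, not package functions):

```r
# log-likelihood: sum_i { y_i * b'x_i - log(1 + exp(b'x_i)) }
log_lik <- function(b, y, X) {
  eta <- as.vector(X %*% b)
  sum(y * eta - log(1 + exp(eta)))
}

# logistic loss: f(b; y, X) = -(1/n) log L(b; y, X)
logistic_loss <- function(b, y, X) -log_lik(b, y, X) / length(y)

# at b = 0 every linear predictor is 0, so the loss is log(2) per observation
X <- matrix(rnorm(20), nrow = 5)
y <- c(0, 1, 1, 0, 1)
logistic_loss(rep(0, 4), y, X)  # log(2) ~ 0.6931
```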
SGS can be seen to be a convex combination of SLOPE and gSLOPE, balanced through alpha, such that it reduces to SLOPE for alpha = 1 and to gSLOPE for alpha = 0.
For the group penalties, see fit_gslope(). For the variable penalties, the vMean SGS sequence (pen_method=1) (Feser and Evangelou (2023)) is given by
$$
v_i^{\text{mean}} = \overline{F}_{\mathcal{N}}^{-1} \left( 1 - \frac{q_v i}{2p} \right), \; \text{where} \; \overline{F}_{\mathcal{N}}(x) := \frac{1}{m} \sum_{j=1}^{m} F_{\mathcal{N}} \left( \alpha x + \frac{1}{3} (1-\alpha) a_j w_j \right),\; i = 1,\ldots,p,
$$
where \(F_\mathcal{N}\) is the cumulative distribution function of a standard Gaussian distribution. The vMax SGS sequence (pen_method=2) (Feser and Evangelou (2023)) is given by
$$
v_i^{\text{max}} = \max_{j=1,\dots,m} \left\{ \frac{1}{\alpha} F_{\mathcal{N}}^{-1} \left(1 - \frac{q_v i}{2p}\right) - \frac{1}{3\alpha}(1-\alpha) a_j w_j \right\}.
$$
The BH SLOPE sequence (pen_method=3) (Bogdan et al. (2015)) is given by
$$
v_i = z(1-i q_v/2p),
$$
where \(z\) is the quantile function of a standard normal distribution.
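This sequence is straightforward to compute with qnorm (bh_slope_seq is an illustrative helper, not a package function):

```r
# BH SLOPE sequence (pen_method = 3): v_i = qnorm(1 - i * q_v / (2 * p))
bh_slope_seq <- function(p, q_v) qnorm(1 - (1:p) * q_v / (2 * p))

v <- bh_slope_seq(p = 10, q_v = 0.1)
all(diff(v) < 0)  # strictly decreasing: largest penalty paired with index 1
```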
Bogdan, M., van den Berg, E., Sabatti, C., Candes, E. (2015). SLOPE - Adaptive variable selection via convex optimization, https://projecteuclid.org/journals/annals-of-applied-statistics/volume-9/issue-3/SLOPEAdaptive-variable-selection-via-convex-optimization/10.1214/15-AOAS842.full
Feser, F., Evangelou, M. (2023). Sparse-group SLOPE: adaptive bi-level selection with FDR-control, https://arxiv.org/abs/2305.09467
Feser, F., Evangelou, M. (2024). Strong screening rules for group-based SLOPE models, https://arxiv.org/abs/2405.15357
Pedregosa, F., Gidel, G. (2018). Adaptive Three Operator Splitting, https://proceedings.mlr.press/v80/pedregosa18a.html
Other SGS-methods: as_sgs(), coef.sgs(), fit_sgo(), fit_sgo_cv(), fit_sgs_cv(), plot.sgs(), predict.sgs(), print.sgs(), scaled_sgs()
# specify a grouping structure
groups = c(1,1,1,2,2,3,3,3,4,4)
# generate data
data = gen_toy_data(p=10, n=5, groups = groups, seed_id=3,group_sparsity=1)
# run SGS
model = fit_sgs(X = data$X, y = data$y, groups = groups, type="linear", path_length = 5,
alpha=0.95, vFDR=0.1, gFDR=0.1, standardise = "l2", intercept = TRUE, verbose=FALSE)