Makes a plot or returns a data frame containing the group elastic net penalty (or its derivative) evaluated at a sequence of input values.
visualize.penalty(x = seq(-5, 5, length.out = 1001),
penalty = c("LASSO", "MCP", "SCAD"),
alpha = 1,
lambda = 1,
gamma = 4,
derivative = FALSE,
plot = TRUE,
subtitle = TRUE,
legend = TRUE,
location = ifelse(derivative, "bottom", "top"),
...)
If plot = TRUE, then a plot is produced. If plot = FALSE, then a data frame is returned.
x: sequence of values at which to evaluate the penalty.
penalty: which penalty or penalties should be plotted?
alpha: elastic net tuning parameter (between 0 and 1).
lambda: overall tuning parameter (non-negative).
gamma: additional hyperparameter for MCP (> 1) or SCAD (> 2).
derivative: if FALSE (default), then the penalty is plotted; otherwise the derivative of the penalty is plotted.
plot: if TRUE (default), then the result is plotted; otherwise the result is returned as a data frame.
subtitle: if TRUE (default), then the hyperparameter values are displayed in the subtitle.
legend: if TRUE (default), then a legend is included to distinguish the different penalty types.
location: the legend's location; ignored if legend = FALSE.
...: additional arguments passed to the plot function, e.g., xlim, ylim, etc.
Nathaniel E. Helwig <helwig@umn.edu>
The group elastic net penalty is defined as
$$P_{\alpha, \lambda}(\boldsymbol\beta) = Q_{\lambda_1}(\|\boldsymbol\beta\|) + \frac{\lambda_2}{2} \|\boldsymbol\beta\|^2$$
where \(Q_\lambda()\) denotes the L1 penalty (LASSO, MCP, or SCAD), \(\| \boldsymbol\beta \| = (\boldsymbol\beta^\top \boldsymbol\beta)^{1/2}\) denotes the Euclidean norm, \(\lambda_1 = \lambda \alpha\) is the L1 tuning parameter, and \(\lambda_2 = \lambda (1-\alpha)\) is the L2 tuning parameter. Note that \(\lambda\) and \(\alpha\) denote the lambda and alpha arguments.
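For illustration, the following minimal sketch (not the package's internal code) evaluates this formula for the LASSO case, where \(Q_\lambda(z) = \lambda z\); the helper name gel_penalty_lasso is hypothetical. MCP or SCAD would substitute their respective \(Q_\lambda\) functions.

# minimal sketch: group elastic net penalty with a LASSO-type Q
# (gel_penalty_lasso is an illustrative name, not a package function)
gel_penalty_lasso <- function(beta, lambda = 1, alpha = 1) {
  znorm <- sqrt(sum(beta^2))        # Euclidean norm ||beta||
  lambda1 <- lambda * alpha         # L1 tuning parameter
  lambda2 <- lambda * (1 - alpha)   # L2 tuning parameter
  lambda1 * znorm + (lambda2 / 2) * znorm^2
}
gel_penalty_lasso(c(0.5, -0.25), lambda = 1, alpha = 0.5)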
Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348-1360. doi:10.1198/016214501753382273
Helwig, N. E. (2024). Versatile descent algorithms for group regularization and variable selection in generalized linear models. Journal of Computational and Graphical Statistics. doi:10.1080/10618600.2024.2362232
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1), 267-288. doi:10.1111/j.2517-6161.1996.tb02080.x
Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2), 894-942. doi:10.1214/09-AOS729
visualize.shrink for plotting the shrinkage operator.
# plot penalty functions
visualize.penalty()
# plot penalty derivatives
visualize.penalty(derivative = TRUE)
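A further hedged sketch, assuming visualize.penalty is available (e.g., via the grpnet package of Helwig, 2024), showing the documented plot = FALSE behavior:

# return the evaluated penalties as a data frame instead of plotting,
# then inspect the first rows
pen <- visualize.penalty(x = seq(-3, 3, length.out = 101), plot = FALSE)
head(pen)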