The primary goal of GGMncv is to provide non-convex penalties for estimating Gaussian graphical models. These are known to overcome various limitations of the lasso (least absolute shrinkage and selection operator), including inconsistent model selection [@zhao2006model], biased estimates [@zhang2010nearly], and a high false positive rate [see, for example, @williams2020back; @williams2019nonregularized].
Several of the penalties are (continuous) approximations to the L0 penalty, that is, best subset selection. However, the solution does not require enumerating all possible models, which makes these penalties computationally efficient.
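To make the "continuous approximation" idea concrete, here is a small standalone sketch (in Python, not part of GGMncv) of one such penalty, the atan penalty of Wang and Zhu (2016). As the tuning parameter `gamma` shrinks toward zero, the penalty approaches `lam` times the indicator that the coefficient is nonzero, i.e., a (scaled) L0 penalty, while remaining continuous:

```python
import math

def atan_penalty(theta, lam=1.0, gamma=0.01):
    # Atan penalty (Wang & Zhu, 2016): lam * (gamma + 2/pi) * arctan(|theta|/gamma)
    return lam * (gamma + 2.0 / math.pi) * math.atan(abs(theta) / gamma)

# For theta = 0 the penalty is exactly 0; for theta != 0 it tends to lam
# as gamma -> 0, mimicking the L0 (best subset) penalty.
for gamma in (1.0, 0.1, 0.001):
    print(gamma, atan_penalty(0.5, gamma=gamma), atan_penalty(0.0, gamma=gamma))
```

The other L0 approximations below (SELO, exponential, log, SICA) follow the same pattern: each is a smooth function of the coefficient that concentrates toward a zero/nonzero indicator as its tuning parameter varies.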
## L0 Approximations

* Atan: `penalty = "atan"` [@wang2016variable]. This is currently the default.
* Seamless L0: `penalty = "selo"` [@dicker2013variable].
* Exponential: `penalty = "exp"` [@wang2018variable].
* Log: `penalty = "log"` [@mazumder2011sparsenet].
* SICA: `penalty = "sica"` [@lv2009unified].
Additional penalties:

* SCAD: `penalty = "scad"` [@fan2001variable].
* MCP: `penalty = "mcp"` [@zhang2010nearly].
* Adaptive lasso: `penalty = "adapt"` [@zou2006adaptive].
* Lasso: `penalty = "lasso"` [@tibshirani1996regression].
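Each of these is selected through the `penalty` argument. A minimal sketch, assuming the package's `ggmncv()` function and its bundled `ptsd` example data set:

```r
library(GGMncv)

# Fit a Gaussian graphical model with the (default) atan penalty.
# ptsd is an example data set assumed to ship with GGMncv.
fit <- ggmncv(cor(ptsd), n = nrow(ptsd), penalty = "atan")

# Any other penalty from the lists above can be swapped in, e.g., SCAD:
fit_scad <- ggmncv(cor(ptsd), n = nrow(ptsd), penalty = "scad")
```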
## Citing GGMncv
It is important to note that GGMncv merely provides a software implementation of other researchers' work. There are no methodological innovations, although this is the most comprehensive R package for estimating GGMs with non-convex penalties. Hence, in addition to citing the package with `citation("GGMncv")`, it is important to give credit to the primary sources. The references are provided above and in the documentation for `ggmncv`.
Further, a survey (or review) of these penalties can be found in @williams2020beyond.