ADMMsigma (version 2.1)

ADMMc: Penalized precision matrix estimation via ADMM (C++)

Description

Penalized precision matrix estimation using the ADMM algorithm

Usage

ADMMc(S, initOmega, initZ, initY, lam, alpha = 1, diagonal = FALSE,
  rho = 2, mu = 10, tau_inc = 2, tau_dec = 2, crit = "ADMM",
  tol_abs = 1e-04, tol_rel = 1e-04, maxit = 10000L)
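
A minimal sketch of a direct call, using simulated data and illustrative values (identity initializations, lam = 0.1); in practice most users fit through the exported ADMMsigma() wrapper rather than calling the compiled routine themselves:

  # Illustrative sketch only; ADMMc() is the internal C++ routine behind ADMMsigma()
  library(ADMMsigma)

  set.seed(1)
  X <- matrix(rnorm(100 * 5), nrow = 100, ncol = 5)

  # sample covariance with denominator n (note: cov() uses n - 1)
  S <- crossprod(scale(X, center = TRUE, scale = FALSE)) / nrow(X)

  p    <- ncol(S)
  init <- diag(p)
  fit  <- ADMMc(S, initOmega = init, initZ = init,
                initY = matrix(0, p, p), lam = 0.1, alpha = 1)

  fit$Omega   # penalized precision matrix estimate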

Arguments

S

p x p sample covariance matrix (denominator n).

initOmega

initialization matrix for Omega.

initZ

initialization matrix for Z.

initY

initialization matrix for Y.

lam

positive tuning parameter for the elastic net penalty.

alpha

elastic net mixing parameter contained in [0, 1]. 0 = ridge, 1 = lasso. Defaults to alpha = 1.

diagonal

option to penalize the diagonal elements of the estimated precision matrix (\(\Omega\)). Defaults to FALSE.

rho

initial step size for ADMM algorithm.

mu

factor for balancing the primal and dual residual norms in the ADMM algorithm. This is used to adjust the step size rho after each iteration (see the sketch following this argument list).

tau_inc

factor by which to increase the step size rho.

tau_dec

factor by which to decrease the step size rho.

crit

criterion for convergence (ADMM or loglik). If crit = loglik, iterations stop when the relative change in the log-likelihood is less than tol_abs. The default is ADMM, which follows the primal and dual residual procedure outlined in Boyd, et al.

tol_abs

absolute convergence tolerance. Defaults to 1e-4.

tol_rel

relative convergence tolerance. Defaults to 1e-4.

maxit

maximum number of iterations. Defaults to 1e4.
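
The step-size arguments work together: after each iteration the primal and dual residual norms are compared, and rho is rescaled when they are out of balance; convergence under crit = ADMM is declared when both residual norms fall below tolerances built from tol_abs and tol_rel, as described in Boyd, et al. An illustrative R sketch of that heuristic (not the package's C++ implementation), assuming r and s denote the current primal and dual residual norms:

  # r: primal residual norm, s: dual residual norm at the current iteration
  update_rho <- function(rho, r, s, mu = 10, tau_inc = 2, tau_dec = 2) {
    if (r > mu * s) {
      rho * tau_inc      # primal residual dominates: increase rho
    } else if (s > mu * r) {
      rho / tau_dec      # dual residual dominates: decrease rho
    } else {
      rho                # residuals balanced: leave rho unchanged
    }
  }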

Value

returns a list which includes:

Iterations

number of iterations.

lam

optimal tuning parameter.

alpha

optimal tuning parameter.

Omega

estimated penalized precision matrix.

Z2

estimated Z matrix.

Y

estimated Y matrix.

rho

estimated rho.

Details

For details on the implementation of 'ADMMsigma', see the vignette https://mgallow.github.io/ADMMsigma/.
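
For reference, a typical analysis uses the exported wrapper rather than ADMMc() directly; a minimal sketch, assuming ADMMsigma() accepts an n x p data matrix together with grids of lam and alpha values and exposes the selected estimate as Omega:

  library(ADMMsigma)

  set.seed(1)
  X <- matrix(rnorm(100 * 5), nrow = 100, ncol = 5)

  # elastic-net penalized precision matrix, tuning selected over the given grids
  fit <- ADMMsigma(X, lam = 10^seq(-2, 2, 0.5), alpha = seq(0, 1, 0.2))
  fit$Omega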

References

  • Boyd, Stephen, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, and others. 2011. 'Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.' Foundations and Trends in Machine Learning 3 (1). Now Publishers, Inc.: 1-122. https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf

  • Hu, Yue, Eric C. Chi, and Genevera I. Allen. 2016. 'ADMM Algorithmic Regularization Paths for Sparse Statistical Machine Learning.' Splitting Methods in Communication, Imaging, Science, and Engineering. Springer: 433-459.

  • Zou, Hui, and Trevor Hastie. 2005. 'Regularization and Variable Selection via the Elastic Net.' Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2). Wiley Online Library: 301-320.

  • Rothman, Adam. 2017. 'STAT 8931 notes on an algorithm to compute the Lasso-penalized Gaussian likelihood precision matrix estimator.'