Penalized precision matrix estimation using the ADMM algorithm
ADMMc(S, A, B, C, initOmega, initZ, initY, lam, alpha = 1, tau = 10,
rho = 2, mu = 10, tau_rho = 2, iter_rho = 10L, crit = "ADMM",
tol_abs = 1e-04, tol_rel = 1e-04, maxit = 10000L)

S: pxp sample covariance matrix (denominator n).
A: option to provide a user-specified matrix for the penalty term. This matrix must have p columns. Defaults to the identity matrix.
B: option to provide a user-specified matrix for the penalty term. This matrix must have p rows. Defaults to the identity matrix.
C: option to provide a user-specified matrix for the penalty term. This matrix must have nrow(A) rows and ncol(B) columns. Defaults to the identity matrix.
initOmega: initialization matrix for Omega.
initZ: initialization matrix for Z2.
initY: initialization matrix for Y.
lam: positive tuning parameter for the elastic net penalty.
alpha: elastic net mixing parameter contained in [0, 1]. 0 = ridge, 1 = lasso. alpha must be a single value (cross validation across alpha is not supported).
tau: optional constant used to ensure positive definiteness of the Q matrix in the algorithm.
rho: initial step size for the ADMM algorithm.
mu: factor for the primal and dual residual norms in the ADMM algorithm. This will be used to adjust the step size rho after each iteration.
tau_rho: factor by which to increase the step size rho.
iter_rho: the step size rho will be updated every iter_rho steps.
crit: criterion for convergence ("ADMM" or "loglik"). If crit = "loglik", iterations will stop when the relative change in log-likelihood is less than tol_abs. Default is "ADMM" and follows the procedure outlined in Boyd et al. (2011).
tol_abs: absolute convergence tolerance. Defaults to 1e-4.
tol_rel: relative convergence tolerance. Defaults to 1e-4.
maxit: maximum number of iterations. Defaults to 1e4.
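Below is a minimal sketch of a call using these arguments, assuming the SCPME package is installed and ADMMc is accessible (for example via SCPME:::ADMMc if it is an internal Rcpp routine); the simulated data, initializations, and value of lam are illustrative only.

library(SCPME)

set.seed(123)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), nrow = n, ncol = p)

# pxp sample covariance matrix with denominator n (not n - 1)
S <- crossprod(scale(X, center = TRUE, scale = FALSE)) / n

# identity matrices for the penalty terms (the documented defaults);
# identity/zero initializations below are illustrative choices, not taken from this page
Ip <- diag(p)
fit <- SCPME:::ADMMc(S, A = Ip, B = Ip, C = Ip,
                     initOmega = Ip, initZ = Ip, initY = matrix(0, p, p),
                     lam = 0.1, alpha = 1, tau = 10, rho = 2, mu = 10,
                     tau_rho = 2, iter_rho = 10L, crit = "ADMM",
                     tol_abs = 1e-4, tol_rel = 1e-4, maxit = 10000L)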
Returns a list which includes:
number of iterations.
optimal tuning parameter.
estimated penalized precision matrix.
estimated Z matrix.
estimated Y matrix.
estimated rho.
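A short sketch of inspecting the returned list from the call above; the exact component names are not stated on this page, so they are checked with names() rather than assumed.

names(fit)   # names of the returned components; compare with the list above
str(fit)     # dimensions and values of each returned component
# The penalized precision matrix estimate and the final step size rho are
# among these components; confirm the exact element names via names(fit).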
For details on the implementation of 'ADMMsigma', see the vignette https://mgallow.github.io/SCPME/.
Boyd, Stephen, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, and others. 2011. 'Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.' Foundations and Trends in Machine Learning 3 (1). Now Publishers, Inc.: 1-122. https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf
Hu, Yue, Eric C. Chi, and Genevera I. Allen. 2016. 'ADMM Algorithmic Regularization Paths for Sparse Statistical Machine Learning.' Splitting Methods in Communication, Imaging, Science, and Engineering. Springer: 433-459.
Molstad, Aaron J., and Adam J. Rothman. 2017. 'Shrinking Characteristics of Precision Matrix Estimators.' Biometrika. https://doi.org/10.1093/biomet/asy023
Rothman, Adam. 2017. 'STAT 8931 notes on an algorithm to compute the Lasso-penalized Gaussian likelihood precision matrix estimator.'