Jerome Friedman

11 packages on CRAN

acepack

Two nonparametric methods for selecting multiple-regression transformations are provided. The first, Alternating Conditional Expectations (ACE), is an algorithm that finds the fixed point of maximal correlation, i.e. it finds transformations of the response and predictors that maximize R^2 using smoothing functions [see Breiman, L. and J.H. Friedman. 1985. "Estimating Optimal Transformations for Multiple Regression and Correlation". Journal of the American Statistical Association 80:580-598. <doi:10.1080/01621459.1985.10478157>]. Also included is the Additivity and Variance Stabilization (AVAS) method, which works better than ACE when the correlation is low [see Tibshirani, R. 1988. "Estimating Transformations for Regression via Additivity and Variance Stabilization". Journal of the American Statistical Association 83:394-405. <doi:10.1080/01621459.1988.10478610>]. A good introduction to these two methods is in chapter 16 of Frank Harrell's "Regression Modeling Strategies" in the Springer Series in Statistics.
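
A minimal usage sketch (the simulated data and defaults are illustrative, not taken from the package docs):

library(acepack)
set.seed(1)
x <- matrix(runif(200, 1, 10), ncol = 1)   # one predictor
y <- log(x[, 1]) + rnorm(200, sd = 0.1)    # response generated on a log scale
fit <- ace(x, y)    # ACE: fit$tx holds the transformed predictor, fit$ty the transformed response
fit2 <- avas(x, y)  # AVAS variant of the same problem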

glasso

Estimation of a sparse inverse covariance matrix using a lasso (L1) penalty. Facilities are provided for estimates along a path of values for the regularization parameter.
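
A minimal sketch of the two entry points, with simulated data standing in for a real covariance matrix:

library(glasso)
set.seed(1)
X <- matrix(rnorm(100 * 5), 100, 5)
S <- var(X)                    # empirical covariance matrix
fit <- glasso(S, rho = 0.1)    # single L1 penalty; fit$wi is the sparse inverse covariance
path <- glassopath(S, rholist = c(0.05, 0.1, 0.5))  # estimates along a penalty path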

glmnet

Extremely efficient procedures for fitting the entire lasso or elastic-net regularization path for linear regression, logistic and multinomial regression models, Poisson regression, the Cox model, multiple-response Gaussian, and grouped multinomial regression. There are two new and important additions. The family argument can be a GLM family object, which opens the door to any programmed family; this comes with a modest computational cost, so when the built-in families suffice, they should be used instead. The other novelty is the relax option, which refits each of the active sets in the path unpenalized. The algorithm uses cyclical coordinate descent in a path-wise fashion, as described in the papers listed in the URL below.
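
A brief sketch of the basics plus the two additions mentioned above (simulated data; argument values are illustrative):

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
fit <- glmnet(x, y)                    # lasso path with the built-in gaussian family
cvfit <- cv.glmnet(x, y)               # cross-validate the penalty
coef(cvfit, s = "lambda.min")
fit_fam <- glmnet(x, y, family = gaussian())  # GLM family object (slower than "gaussian")
fit_rlx <- glmnet(x, y, relax = TRUE)         # relaxed fit: active sets refit unpenalized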

orderedLasso

Ordered lasso and time-lag sparse regression. Ordered Lasso fits a linear model and imposes an order constraint on the coefficients. It writes the coefficients as positive and negative parts, and requires both parts to be non-increasing and positive. Time-Lag Lasso generalizes the Ordered Lasso to a general data matrix with multiple predictors. For more details, see Suo, X. and Tibshirani, R. (2014) 'An Ordered Lasso and Sparse Time-lagged Regression'.
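
A rough sketch; the orderedLasso(x, y, lambda) signature is assumed here, so treat the call as illustrative and check the manual:

library(orderedLasso)
set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- drop(x %*% c(5, 4, 3, 2, 1)) + rnorm(100)  # coefficients decay with column index
fit <- orderedLasso(x, y, lambda = 1)           # assumed signature; see ?orderedLasso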

owl

Efficient implementations for Sorted L-One Penalized Estimation (SLOPE): generalized linear models regularized with the sorted L1-norm (Bogdan et al. (2015) <doi:10/gfgwzt>) or, equivalently, ordered weighted L1-norm (OWL). Supported models include ordinary least-squares regression, binomial regression, multinomial regression, and Poisson regression. Both dense and sparse predictor matrices are supported. In addition, the package features predictor screening rules that enable fast and efficient solutions to high-dimensional problems.
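
A sketch assuming owl() is the exported fitting function with glmnet-style (x, y, family) arguments (an assumption; check the manual):

library(owl)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- drop(x %*% c(3, 2, 1, rep(0, 7))) + rnorm(100)
fit <- owl(x, y, family = "gaussian")  # assumed interface; sorted-L1 (OWL) penalized path
coef(fit)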

pcLasso

A method for fitting the entire regularization path of the principal components lasso for linear and logistic regression models. The algorithm uses cyclic coordinate descent in a path-wise fashion. See the URL below for more information on the algorithm. See Tay, K., Friedman, J. and Tibshirani, R. (2018) 'Principal component-guided sparse regression' <arXiv:1810.04651>.
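
A minimal sketch; the ratio argument (mixing between lasso and principal-components shrinkage) and its value here are assumptions to check against the manual:

library(pcLasso)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
fit <- pcLasso(x, y, ratio = 0.8)        # assumed mixing value
cvfit <- cv.pcLasso(x, y, ratio = 0.8)   # cross-validated path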

pliable

Fits a pliable lasso model. For details see Tibshirani and Friedman (2018) <arXiv:1712.00484>.
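
A sketch assuming the main function is pliable(x, z, y), with x the main predictors and z the modifying variables (signature assumed; see the manual):

library(pliable)
set.seed(1)
n <- 100
x <- matrix(rnorm(n * 10), n, 10)        # main predictors
z <- matrix(rnorm(n * 2), n, 2)          # modifying variables
y <- x[, 1] * (1 + z[, 1]) + rnorm(n)    # effect of x1 is modified by z1
fit <- pliable(x, z, y)                  # assumed signature; see ?pliable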

SGL

Fit a regularized generalized linear model via penalized maximum likelihood, using the sparse-group lasso penalty. The model is fit for a path of values of the penalty parameter. Fits linear, logistic and Cox models.
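
A minimal sketch of a linear sparse-group lasso fit (the simulated data and grouping are illustrative):

library(SGL)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- drop(x %*% c(2, 1, rep(0, 8))) + rnorm(100)
data <- list(x = x, y = y)
index <- rep(1:5, each = 2)                  # group membership of the 10 predictors
fit <- SGL(data, index, type = "linear")     # path of sparse-group lasso fits
cvfit <- cvSGL(data, index, type = "linear") # cross-validated version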

SLOPE

Efficient implementations for Sorted L-One Penalized Estimation (SLOPE): generalized linear models regularized with the sorted L1-norm (Bogdan et al. (2015) <doi:10/gfgwzt>). Supported models include ordinary least-squares regression, binomial regression, multinomial regression, and Poisson regression. Both dense and sparse predictor matrices are supported. In addition, the package features predictor screening rules that enable fast and efficient solutions to high-dimensional problems.
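
A minimal sketch, assuming the current SLOPE() interface with (x, y, family) arguments:

library(SLOPE)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- drop(x %*% c(5, 4, 3, rep(0, 17))) + rnorm(100)
fit <- SLOPE(x, y, family = "gaussian")  # sorted-L1 penalized regression
coef(fit)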

sparsenet

Efficient procedure for fitting regularization paths between L1 and L0, using the MC+ penalty of Zhang, C.H. (2010) <doi:10.1214/09-AOS729>. Implements the methodology described in Mazumder, Friedman and Hastie (2011) <doi:10.1198/jasa.2011.tm09738>. Sparsenet computes the regularization surface over both the family parameter and the tuning parameter by coordinate descent.
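
A minimal sketch (simulated data; package defaults assumed for the gamma and lambda grids):

library(sparsenet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- drop(x %*% c(3, 2, 1, rep(0, 17))) + rnorm(100)
fit <- sparsenet(x, y)       # surface over gamma (penalty family) and lambda (tuning)
cvfit <- cv.sparsenet(x, y)  # cross-validation over the surface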

TrioSGL

Fit a trio model via penalized maximum likelihood. The model is fit for a path of values of the penalty parameter. This package is based on Noah Simon et al. (2011) <doi:10.1080/10618600.2012.681250>.
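
A hypothetical sketch built by analogy with SGL; the TrioSGL() call, its arguments, and the dummy trio design below are all assumptions to verify against the package manual:

library(TrioSGL)
set.seed(1)
# Dummy design: 50 trios x 4 rows each (case plus 3 pseudo-controls), 20 SNP
# columns in 10 groups of 2; the call signature is assumed, not documented here.
X <- matrix(rbinom(200 * 20, 2, 0.3), 200, 20)
fit <- TrioSGL(X, index = rep(1:10, each = 2))  # assumed interface; see ?TrioSGL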