Lorenz.FABS
Solves the penalized Lorenz regression with an (adaptive) Lasso penalty on a grid of lambda values.
For each value of lambda, the function returns the vector of estimated parameters, the estimated explained Gini coefficient, and the Lorenz-\(R^2\) of the regression.
Lorenz.FABS(
y,
x,
standardize = TRUE,
weights = NULL,
kernel = 1,
h = length(y)^(-1/5.5),
gamma = 0.05,
lambda = "Shi",
w.adaptive = NULL,
eps = 0.005,
iter = 10^4,
lambda.min = 1e-07
)
A list with several components:
lambda
vector gathering the different values of the regularization parameter.
theta
matrix where column i provides the vector of estimated coefficients corresponding to the value lambda[i] of the regularization parameter.
LR2
vector where element i provides the Lorenz-\(R^2\) attached to the value lambda[i] of the regularization parameter.
Gi.expl
vector where element i provides the estimated explained Gini coefficient related to the value lambda[i] of the regularization parameter.
y
a vector of responses.
x
a matrix of explanatory variables.
standardize
should the variables be standardized before the estimation process? Default value is TRUE.
weights
vector of sample weights. By default, each observation is given the same weight.
kernel
integer indicating which kernel function to use. The value 1 (the default) implies the use of an Epanechnikov kernel, while the value 2 implies the use of a biweight kernel.
h
bandwidth of the kernel, determining the smoothness of the approximation of the indicator function. Default value is n^(-1/5.5), where n is the sample size.
gamma
value of the Lagrange multiplier in the loss function.
lambda
this parameter relates to the regularization parameter. Several options are available.
grid
If lambda="grid", lambda is defined on a grid, equidistant in the logarithmic scale.
Shi
If lambda="Shi", lambda is defined within the algorithm, as in Shi et al. (2018).
supplied
The user can also supply the lambda vector directly.
w.adaptive
vector of size equal to the number of covariates, where each entry indicates the weight in the adaptive Lasso. By default, each covariate is given the same weight (ordinary Lasso).
eps
step size in the FABS algorithm. Default value is 0.005.
iter
maximum number of iterations. Default value is 10^4.
lambda.min
lower bound of the penalty parameter. Only used if lambda="Shi".
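As an illustrative sketch of the lambda options described above (assuming the package providing Lorenz.FABS and the bundled Data.Incomes dataset are available; the supplied lambda values below are arbitrary placeholders):

```r
# Example data shipped with the package (assumed available)
data(Data.Incomes)
y <- Data.Incomes[, 1]
x <- as.matrix(Data.Incomes[, -c(1, 2)])

# Default: lambda vector determined within the algorithm, as in Shi et al. (2018)
fit.shi <- Lorenz.FABS(y, x)

# lambda defined on a grid, equidistant in the logarithmic scale
fit.grid <- Lorenz.FABS(y, x, lambda = "grid")

# User-supplied lambda vector (illustrative values)
fit.user <- Lorenz.FABS(y, x, lambda = c(0.1, 0.01, 0.001))
```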
The regression is solved using the FABS algorithm developed by Shi et al. (2018) and adapted to our case. For a comprehensive explanation of the penalized Lorenz regression, see Jacquemain et al. (2024). In order to ensure identifiability, theta is forced to have an \(L_2\)-norm equal to one.
Jacquemain, A., C. Heuchenne, and E. Pircalabelu (2024). A penalised bootstrap estimation procedure for the explained Gini coefficient. Electronic Journal of Statistics 18(1), 247-300.
Shi, X., Y. Huang, J. Huang, and S. Ma (2018). A Forward and Backward Stagewise Algorithm for Nonconvex Loss Function with Adaptive Lasso. Computational Statistics & Data Analysis 124, 235-251.
Lorenz.Reg, Lorenz.SCADFABS
data(Data.Incomes)
y <- Data.Incomes[,1]
x <- as.matrix(Data.Incomes[,-c(1,2)])
Lorenz.FABS(y, x)
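A sketch of how the returned list can be inspected, using the component names documented in the Value section (outputs depend on the data and are not shown):

```r
fit <- Lorenz.FABS(y, x)

fit$lambda    # vector of regularization parameter values
fit$theta     # one column of estimated coefficients per lambda value
fit$LR2       # Lorenz-R2 along the lambda path
fit$Gi.expl   # estimated explained Gini coefficient along the lambda path

# Each column of theta should have unit L2-norm (identifiability constraint)
colSums(fit$theta^2)
```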