# lambda00

##### Upper limit of the penalty parameter for family="binomial"

Use bivariate winsorization to estimate the smallest value of the upper limit for the penalty parameter.

##### Keywords

robust

##### Usage

```r
lambda00(x, y, normalize = TRUE, intercept = TRUE, const = 2, prob = 0.95,
         tol = .Machine$double.eps^0.5, eps = .Machine$double.eps, ...)
```

##### Arguments

- `x`: a numeric matrix containing the predictor variables.
- `y`: a numeric vector containing the response variable.
- `normalize`: a logical indicating whether the winsorized predictor variables should be normalized (the default is `TRUE`).
- `intercept`: a logical indicating whether a constant term should be included in the model (the default is `TRUE`).
- `const`: numeric; tuning constant used in univariate winsorization (the default is 2; see the sketch after this list).
- `prob`: numeric; probability for the quantile of the $\chi^{2}$ distribution used in bivariate winsorization (the default is 0.95).
- `tol`: a small positive numeric value used to detect singularity issues when computing the correlation estimates for bivariate winsorization.
- `eps`: a small positive numeric value used to decide whether the robust scale estimate of a variable is too small (an effective zero).
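
The roles of `const` and `eps` are easiest to see in the univariate case. Below is a minimal sketch of univariate winsorization under the usual median/MAD standardization; `winsorize_uni` is a hypothetical helper, not the package's internal code.

```r
# Minimal sketch of univariate winsorization (hypothetical helper):
# standardize by median and MAD, clip at +/- const, transform back.
winsorize_uni <- function(x, const = 2, eps = .Machine$double.eps) {
  center <- median(x)
  scale <- mad(x)
  if (scale < eps) return(x - center)   # robust scale is an effective zero
  z <- (x - center) / scale
  center + scale * pmin(pmax(z, -const), const)
}

set.seed(1)
x <- c(rnorm(20), 8)          # one clear outlier
range(winsorize_uni(x))       # the outlier is clipped near median + 2 * MAD
```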

##### Details

The estimation procedure follows a similar approach to Alfons et al. (2013), but the Pearson correlation between $y$ and the $j$th predictor variable $x_j$ on the winsorized data is replaced by a robustified point-biserial correlation, which is suited to the binary response in logistic regression.
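
As a hedged illustration of that replacement (not the package's exact internals), one can winsorize each predictor, compute the classical point-biserial correlation with the binary response, and take the largest absolute value over the predictors as the bound. `robust_pb_cor` and `lambda00_sketch` are hypothetical names, and the winsorization helper is the one sketched under Arguments.

```r
# Hedged sketch: point-biserial correlation on winsorized data, then the
# maximum absolute correlation over all predictors as the penalty bound.
robust_pb_cor <- function(xj, y, const = 2) {
  xw <- winsorize_uni(xj, const = const)   # helper sketched under Arguments
  n <- length(y); n1 <- sum(y == 1); n0 <- n - n1
  s <- sd(xw)
  if (s == 0) return(0)                    # constant predictor carries no signal
  (mean(xw[y == 1]) - mean(xw[y == 0])) / s * sqrt(n1 * n0 / n^2)
}

lambda00_sketch <- function(x, y, const = 2) {
  max(abs(apply(x, 2, robust_pb_cor, y = y, const = const)))
}
```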

##### Value

A robust estimate of the smallest value of the upper limit for the penalty parameter in enetLTS regression (for `family="binomial"`).

##### Note

For linear regression, exactly the same procedure as in Alfons et al. (2013) is used, which is based on the Pearson correlation between $y$ and the $j$th predictor variable $x_j$ on winsorized data.
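
For comparison, that linear-regression bound can be computed directly with `lambda0()` from the robustHD package (listed under See also); this assumes robustHD is installed.

```r
# Linear-regression counterpart via robustHD::lambda0 (winsorized Pearson
# correlations, as in Alfons et al. 2013).
library(robustHD)
set.seed(86)
xlin <- matrix(rnorm(100 * 25), nrow = 100)
ylin <- drop(xlin[, 1:6] %*% rep(1, 6)) + rnorm(100, 0, 0.5)
l0 <- lambda0(xlin, ylin)
```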

##### References

Kurnaz, F.S., Hoffmann, I. and Filzmoser, P. (2017) Robust and sparse estimation methods for high dimensional linear and logistic regression. Chemometrics and Intelligent Laboratory Systems.

Alfons, A., Croux, C. and Gelper, S. (2013) Sparse least trimmed squares regression for analyzing high-dimensional large data sets. The Annals of Applied Statistics, 7(1), 226--248.

##### See also

`enetLTS`, `sparseLTS`, `lambda0`

##### Examples

```r
## Not run:
set.seed(86)
n <- 100; p <- 25                             # number of observations and variables
beta <- rep(0, p); beta[1:6] <- 1             # 6 of 25 nonzero coefficients (not used below)
sigma <- 0.5                                  # controls signal-to-noise ratio
x <- matrix(rnorm(n * p, 0, sigma), nrow = n) # predictor matrix
e <- rnorm(n, 0, 1)                           # error terms (not used below)
eps <- 0.05                                   # 5% contamination, added only to class 0
m <- ceiling(eps * n)                         # number of contaminated observations
y <- sample(0:1, n, replace = TRUE)           # binary response
xout <- x
xout[y == 0, ][1:m, ] <- xout[y == 0, ][1:m, ] + 10  # shift m class-0 rows: bad leverage points
yout <- y                                     # labels unchanged; shifted rows act as misclassified points

# compute the smallest value of the upper limit for the penalty parameter
l00 <- lambda00(xout, yout)
## End(Not run)
```
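
In practice the returned bound is typically used as the upper end of a grid of candidate penalty values for `enetLTS()`. A hedged sketch follows; the argument names `xx`, `yy`, and `lambdas` follow the `enetLTS()` help page, so check them against your installed version.

```r
# Build a decreasing grid of candidate penalties from the bound l00 and
# pass it to enetLTS(); fitting is commented out because it can be slow.
lambdas <- seq(l00, 0, length.out = 20)
# fit <- enetLTS(xx = xout, yy = yout, family = "binomial",
#                alphas = 0.5, lambdas = lambdas)
```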

Documentation reproduced from package enetLTS, version 0.1.0, License: GPL (>= 3)
