lqa (version 1.0-2)

ao: Approximated Octagon Penalty

Description

Object of the penalty class to handle the AO penalty (Ulbricht, 2010).

Usage

ao(lambda = NULL, ...)

Arguments

lambda
two-dimensional tuning parameter. The first component corresponds to the regularization parameter $\lambda$ and must be a nonnegative real number. The second component indicates the exponent $\gamma$ of the bridge penalty term. See details below.
...
further arguments.

Value

An object of the class penalty. This is a list with the elements

  • penalty (character): the penalty name.
  • lambda (double): the (nonnegative) regularization parameter.
  • getpenmat (function): computes the diagonal penalty matrix.

Details

The basic idea of the AO penalty is to use a linear combination of the $L_1$-norm and the bridge penalty with $\gamma > 1$, where the amount of the bridge penalty part is driven by the empirical correlation. So, consider the penalty

$$P_{\tilde{\lambda}}^{ao}(\boldsymbol{\beta}) = \sum_{i=2}^{p} \sum_{j<i} p_{\tilde{\lambda},ij}(\boldsymbol{\beta}), \qquad \tilde{\lambda} = (\lambda, \gamma),$$

where

$$p_{\tilde{\lambda},ij}(\boldsymbol{\beta}) = \lambda \left[ (1 - |\varrho_{ij}|)(|\beta_i| + |\beta_j|) + |\varrho_{ij}|(|\beta_i|^{\gamma} + |\beta_j|^{\gamma}) \right],$$

and $\varrho_{ij}$ denotes the value of the (empirical) correlation of the i-th and j-th regressor. Since we are going to approximate an octagonal polytope in two dimensions, we will refer to this penalty as the approximated octagon (AO) penalty. Note that $P_{\tilde{\lambda}}^{ao}(\boldsymbol{\beta})$ leads to a dominating lasso term if the regressors are uncorrelated, and to a dominating bridge term if they are nearly perfectly correlated.
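As an illustrative sketch of the pairwise form above (written in Python/NumPy rather than the package's R code; `ao_penalty` is a hypothetical helper, not part of lqa), the penalty can be evaluated directly from a coefficient vector and a correlation matrix:

```python
import numpy as np

def ao_penalty(beta, rho, lam, gamma):
    """Pairwise AO penalty:
    lam * sum_{i>j} [(1-|rho_ij|)(|b_i|+|b_j|) + |rho_ij|(|b_i|^g + |b_j|^g)].
    beta: coefficient vector, rho: correlation matrix, lam >= 0, gamma > 1."""
    p = len(beta)
    total = 0.0
    for i in range(1, p):
        for j in range(i):
            r = abs(rho[i, j])
            total += lam * ((1 - r) * (abs(beta[i]) + abs(beta[j]))
                            + r * (abs(beta[i]) ** gamma + abs(beta[j]) ** gamma))
    return total

# Uncorrelated regressors: only the lasso part survives.
beta = np.array([1.0, -2.0])
print(ao_penalty(beta, np.eye(2), lam=0.5, gamma=2.0))          # -> 1.5
# Perfectly correlated regressors: only the bridge part survives.
rho_full = np.ones((2, 2))
print(ao_penalty(beta, rho_full, lam=0.5, gamma=2.0))           # -> 2.5
```

The two printed values show the behaviour noted above: with $\varrho_{12} = 0$ the penalty is the lasso term $0.5(|1| + |{-2}|) = 1.5$, and with $|\varrho_{12}| = 1$ it is the bridge term $0.5(1^2 + 2^2) = 2.5$.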

The penalty can be rearranged as

$$P_{\tilde{\lambda}}^{ao}(\boldsymbol{\beta}) = \sum_{i=1}^{p} p_{\tilde{\lambda},i}^{ao}(\beta_i),$$

where

$$p_{\tilde{\lambda},i}^{ao}(\beta_i) = \lambda \left[ |\beta_i| \sum_{j \neq i} (1 - |\varrho_{ij}|) + |\beta_i|^{\gamma} \sum_{j \neq i} |\varrho_{ij}| \right].$$

It uses two tuning parameters $\tilde{\lambda} = (\lambda, \gamma)$, where $\lambda$ controls the penalty amount and $\gamma$ manages the approximation of the pairwise $L_\infty$-norm.

References

Ulbricht, Jan (2010) Variable Selection in Generalized Linear Models. Ph.D. Thesis. LMU Munich.

See Also

penalty, genet