lqa (version 1.0-3)

penalty: Penalty Objects

Description

Penalty objects provide a convenient way to specify the details of the penalty terms used by functions for penalized regression problems, such as lqa. See the documentation of lqa for details on how such model fitting takes place.

Usage

penalty (x, ...)

Arguments

x
the function penalty accesses the penalty objects which are stored within objects created by modelling functions (e.g. lqa).
...
further arguments passed to methods.

Value

An object of class penalty (which has a concise print method). This is a list with elements

  • penalty: character; the penalty name.
  • lambda: double; the (non-negative) tuning parameter.
  • getpenmat: function returning the penalty matrix. Note this element is optional: either getpenmat or first.derivative (and, if necessary, a.coefs) must be given.
  • first.derivative: function returning a $J$-dimensional vector of the first derivatives of the $J$ penalty terms with respect to $\xi_j = \mathbf{a}_j^\top \boldsymbol{\beta}$, not with respect to $\boldsymbol{\beta}$.
  • a.coefs: a $p \times J$ matrix containing the coefficients $\mathbf{a}_j$ of the linear combinations.
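
For illustration, a minimal hand-built penalty object with these elements might look as follows. This is only a sketch based on the list structure documented above: the constructor name my.ridge is made up, and the exact arguments that the fitting routines pass to getpenmat() may differ from what is assumed here.

my.ridge <- function (lambda = NULL, ...)
{
  getpenmat <- function (beta = NULL, ...)
    lambda * diag (length (beta))    # quadratic penalty: A_lambda = lambda * I
  structure (list (penalty = "my.ridge",
                   lambda = lambda,
                   getpenmat = getpenmat),
             class = "penalty")
}

pen <- my.ridge (lambda = 2)
pen$getpenmat (beta = c (1, -2, 3))  # 3 x 3 diagonal penalty matrix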


Details

penalty is a generic function with methods for objects of the lqa class. The most crucial task of a penalty object is to compute $$\mathbf{A}_\lambda = \sum_{j=1}^J \frac{p_{\lambda,j}'(|\mathbf{a}_j^\top \boldsymbol{\beta}|)}{\sqrt{(\mathbf{a}_j^\top \boldsymbol{\beta})^2 + c}} \mathbf{a}_j\mathbf{a}_j^\top,$$ where $c > 0$ is a small real number. This approximated penalty matrix is used in the fitting procedures lqa.update2, GBlockBoost or ForwardBoost.

There are five basic methods for penalty objects: penalty, lambda, getpenmat, first.derivative, a.coefs. The methods penalty and lambda are mandatory: they are necessary to identify the penalty family and the tuning parameter vector, respectively, in the other functions of the lqa package, but they just appear as list elements in the structure() call. The function getpenmat() and the functions first.derivative() and a.coefs() are mutually exclusive; whether we need the first one or the last two depends on the nature of the penalty. Hence we have to distinguish two cases:
  1. (i) The use of a function getpenmat() is more efficient (in a numerical sense) if
     • the penalty matrix $\mathbf{A}_\lambda$ as given above is a diagonal matrix, e.g. if $J = p$ and $\mathbf{a}_j, \: j = 1, \ldots, J$ just contains one non-zero element, or
     • the penalty is quadratic.
     Then the (approximate) penalty matrix $\mathbf{A}_\lambda$ can be computed directly. Most implemented penalties are of those types, e.g. ridge, lasso, scad and penalreg.
  2. (ii) Otherwise, i.e. if the penalty matrix is neither diagonal nor the penalty quadratic, the functions first.derivative() and, if the penalty terms consist of linear combinations of the coefficients, a.coefs() must be supplied instead.
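
To make the displayed formula concrete, the following sketch computes the approximated penalty matrix $\mathbf{A}_\lambda$ for a lasso-type penalty, where $J = p$, $\mathbf{a}_j$ is the $j$-th unit vector and $p_{\lambda,j}'(\cdot) \equiv \lambda$. The function approx.penmat and its arguments are purely illustrative and not part of the lqa package.

approx.penmat <- function (beta, lambda, c.small = 1e-6)
{
  p <- length (beta)
  A <- matrix (0, p, p)
  for (j in 1:p)
  {
    a.j <- diag (p)[, j]                      # a_j: j-th unit vector
    xi.j <- drop (crossprod (a.j, beta))      # xi_j = a_j' beta
    w.j <- lambda / sqrt (xi.j^2 + c.small)   # p'_lambda,j(|xi_j|) / sqrt(xi_j^2 + c)
    A <- A + w.j * tcrossprod (a.j)           # accumulate a_j a_j' terms
  }
  A
}

approx.penmat (beta = c (1, -2, 3, -4), lambda = 1.5)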

Examples

penalty <- lasso (lambda = 1.5)
penalty
beta <- c (1, -2, 3, -4)
penalty$first.derivative (beta)
