mgcv (version 0.3-1)

mgcv: Multiple Smoothing Parameter Estimation by GCV or UBRE

Description

Function to efficiently estimate smoothing parameters in generalized ridge regression problems with multiple (quadratic) penalties, by GCV or UBRE. The function uses Newton's method in multiple dimensions.

Usage

mgcv(M)

Arguments

M$y
The response data vector
M$X
The design matrix for the problem. Note that ncol(M$X) must give the number of model parameters, while nrow(M$X) gives the number of data points.
M$Z
Matrix containing the null space of the linear equality constraints on the problem, stored as Householder rotations as produced by routine QT(). The number of rows of M$Z must equal the number of constraints.
M$S
A 3 dimensional array. M$S[i] is a 2 dimensional array containing the elements of the ith penalty matrix. Note that to economize on storage space, rows and columns containing only zeroes need not be stored.
M$off
Offset values locating the elements of M$S[] in the correct location within each penalty coefficient matrix.
M$fix
M$fix[i] is FALSE if the corresponding smoothing parameter is to be estimated, or TRUE if it is to be fixed at zero.
M$df
M$df[i] gives the number of rows and columns of M$S[i] that are non-zero.
M$sig2
This serves two purposes. If it is strictly positive then UBRE is used, with sig2 taken as the known error variance. If it is non-positive then GCV is used (and an estimate of the error variance is returned).
M$sp
An array of initial smoothing parameter estimates; if any of these is not strictly positive then automatic generation of initial values is used.
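The storage convention for M$S, M$off and M$df can be illustrated by reconstructing a full penalty matrix from its stored non-zero block. This is a sketch only: expand_S is not an mgcv function, and a 1-based offset is assumed here.

```r
# Rebuild a full q x q penalty matrix from its stored non-zero block.
# Si:  the df x df non-zero block (as stored in M$S[i])
# off: index of the first row/column of the block within the full
#      penalty matrix (assumed 1-based for this illustration)
expand_S <- function(Si, off, q) {
    S_full <- matrix(0, q, q)
    df <- nrow(Si)
    idx <- off:(off + df - 1)
    S_full[idx, idx] <- Si
    S_full
}

Si <- matrix(c(2, -1, -1, 2), 2, 2)     # a 2 x 2 penalty block (df = 2)
S_full <- expand_S(Si, off = 3, q = 5)  # block occupies rows/columns 3-4
```

Only the 2 x 2 block need be stored; the remaining rows and columns of the full 5 x 5 penalty matrix are zero, which is the economy of storage referred to above.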

Value

  • The function returns a list of the same form as the input list with the following elements added/modified:
  • M$p The best fit parameters given the estimated smoothing parameters.
  • M$sp The estimated smoothing parameters ($\lambda_i/\rho$).
  • M$sig2 The estimated (GCV) or supplied (UBRE) error variance.

WARNING

The method may not behave well with near column rank deficient ${\bf X}$, especially in contexts where the weights vary wildly.

Details

This is documentation for the code implementing the method described in section 4 of Wood (2000). The method is a computationally efficient means of applying GCV to the problem of smoothing parameter selection in generalized ridge regression problems of the form: $$minimise~ \| { \bf W}^{1/2} ({ \bf Xp - y} ) \|^2 \rho + \sum_{i=1}^m \theta_i {\bf p^\prime S}_i{\bf p}$$ possibly subject to constraints ${\bf Cp}={\bf 0}$. ${\bf X}$ is a design matrix, $\bf p$ a parameter vector, $\bf y$ a data vector, $\bf W$ a diagonal weight matrix, ${\bf S}_i$ a positive semi-definite matrix of coefficients defining the ith penalty and $\bf C$ a matrix of coefficients defining any linear equality constraints on the problem. The smoothing parameters are the $\lambda_i$ but there is an overall smoothing parameter $\rho$ as well. Note that ${\bf X}$ must be of full column rank, at least when projected into the null space of any equality constraints.

The method operates by alternating very efficient direct searches for $\rho$ with Newton updates of the logs of the $\lambda_i$.

The method is coded in C and is intended to be portable. It should be noted that seriously ill-conditioned problems (i.e. with close to column rank deficiency in the design matrix) may cause problems, especially if weights vary wildly between observations.
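For a single fixed penalty, the GCV score being minimised can be computed directly. The following is a minimal sketch in base R, independent of mgcv: gcv_score is an illustrative helper, not a package function, and unit weights are assumed.

```r
# GCV score for a ridge regression with one penalty matrix S and
# smoothing parameter lambda (unit weights assumed).
gcv_score <- function(y, X, S, lambda) {
    n <- nrow(X)
    # influence (hat) matrix A = X (X'X + lambda S)^{-1} X'
    A <- X %*% solve(crossprod(X) + lambda * S, t(X))
    rss <- sum((y - A %*% y)^2)  # residual sum of squares
    edf <- sum(diag(A))          # effective degrees of freedom, tr(A)
    n * rss / (n - edf)^2        # the GCV score
}

set.seed(1)
xx <- seq(0, 1, length = 20)
X <- cbind(1, xx, xx^2)                 # simple quadratic design
S <- diag(c(0, 0, 1))                   # penalise the quadratic term only
y <- X %*% c(1, 2, -1) + rnorm(20, 0, 0.1)
lambdas <- 10^seq(-4, 2)
scores <- sapply(lambdas, function(lam) gcv_score(y, X, S, lam))
```

In mgcv itself this score is minimised simultaneously over all the smoothing parameters, using the alternation of direct search and Newton steps described above, rather than a grid search as sketched here.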

References

Gu and Wahba (1991) Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method. SIAM J. Sci. Statist. Comput. 12:383-398

Wood (2000) Modelling and Smoothing Parameter Estimation with Multiple Quadratic Penalties. JRSSB 62(2):413-428

http://www.ruwpa.st-and.ac.uk/simon.html

See Also

GAMsetup, gam

Examples

# This example modified from routine SANtest()
    
    n<-100 # number of observations to simulate
    x <- runif(4 * n, 0, 1) # simulate covariates
    x <- array(x, dim = c(4, n)) # put into array for passing to GAMsetup
    pi <- asin(1) * 2  # begin simulating some data
    y <- 2 * sin(pi * x[1, ])
    y <- y + exp(2 * x[2, ]) - 3.75887
    y <- y + 0.2 * x[3, ]^11 * (10 * (1 - x[3, ]))^6 + 10 * (10 *
        x[3, ])^3 * (1 - x[3, ])^10 - 1.396
    sig2 <- -1   # non-positive value signals GCV; abs(sig2) sets the simulation error variance
    e <- rnorm(n, 0, sqrt(abs(sig2)))
    y <- y + e          # simulated data
    w <- matrix(1, n, 1) # weight matrix
    par(mfrow = c(2, 2)) # scatter plots of simulated data   
    plot(x[1, ], y)
    plot(x[2, ], y)
    plot(x[3, ], y)
    plot(x[4, ], y)
    G <- list(m = 4, n = n, nsdf = 0, df = c(15, 15, 15, 15),
        x = x) # create list for passing to GAMsetup
    H <- GAMsetup(G)
    H$y <- y    # add data to H
    H$sig2 <- sig2  # add variance (signalling GCV use in this case) to H
    H$sp <- array(-1, H$m)   # -1 signals automatic initial smoothing parameter estimates
    H$fix <- array(FALSE, H$m) # estimate (rather than fix) all smoothing parameters
    H$w <- w       # add weights to H
    H <- mgcv(H)  # select smoothing parameters and fit model
