rags2ridges (version 1.1)

optPenaltyCV: Select optimal penalty parameter by leave-one-out cross-validation

Description

Function that selects the optimal penalty parameter for the ridgeS call by means of leave-one-out cross-validation.

Usage

optPenaltyCV(Y, lambdaMin, lambdaMax, step, type = "Alt", target = diag(1/diag(cov(Y))),
targetScale = TRUE, output = "all", graph = TRUE, verbose = TRUE)

Arguments

Y
Data matrix.
lambdaMin
A numeric giving the minimum value for the penalty parameter.
lambdaMax
A numeric giving the maximum value for the penalty parameter.
step
A numeric determining the coarseness in moving through the grid [lambdaMin, lambdaMax].
type
A character indicating the type of ridge estimator to be used. Must be one of: "Alt", "ArchI", "ArchII".
target
A target matrix (in precision terms) for Type I ridge estimators.
targetScale
A logical indicating if the default target is to be made dependent on the leave-one-out sample.
output
A character indicating if the output is either heavy or light. Must be one of: "all", "light".
graph
A logical indicating if the grid search for the optimal penalty parameter should be visualized.
verbose
A logical indicating if intermediate output should be printed on screen.

Value

An object of class list:

  • optLambda: A numeric giving the optimal value of the penalty parameter.
  • optPrec: A matrix representing the precision matrix of the chosen type (see ridgeS) under the optimal value of the penalty parameter.
  • lambdas: A numeric vector representing all values of the penalty parameter for which cross-validation was performed.
  • LLs: A numeric vector representing the mean of cross-validated negative log-likelihoods for each value of the penalty parameter given in lambdas.

Details

The function calculates a cross-validated negative log-likelihood score (using a regularized ridge estimator for the precision matrix) for each value of the penalty parameter contained in the search grid by way of leave-one-out cross-validation. The value of the penalty parameter that achieves the lowest cross-validated negative log-likelihood score is deemed optimal.

The penalty parameter must be positive, so lambdaMin must be a positive scalar. The maximum allowable value of lambdaMax depends on the type of ridge estimator employed. For details on the available types of ridge estimator ("Alt", "ArchI", "ArchII") see ridgeS.

When one employs target shrinkage (Type I ridge estimation: see ridgeS) with the default target (a diagonal matrix with the inverse sample variances as its entries), one may choose to let the default target depend on the complete data sample or on the leave-one-out sample (the latter requires targetScale = TRUE).

The output consists of an object of class list (see below). When output = "light" only the optLambda and optPrec elements of the list are returned.
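The grid search can be pictured with the following plain-R sketch. It is illustrative only: it assumes ridgeS accepts a sample covariance matrix and a penalty value as its first two arguments, omits the type/target refinements of the actual implementation, assumes the grid is built with step points between lambdaMin and lambdaMax, and evaluates the Gaussian log-likelihood only up to constants.

# Sketch of the LOOCV grid search (not the package implementation)
looNegLogLik <- function(Y, lambda) {
  n <- nrow(Y)
  nll <- numeric(n)
  for (i in seq_len(n)) {
    Ytrain <- Y[-i, , drop = FALSE]
    P  <- ridgeS(cov(Ytrain), lambda)          # regularized precision on LOO sample
    yi <- Y[i, ] - colMeans(Ytrain)            # centered left-out observation
    # Gaussian negative log-likelihood of the left-out case
    # (up to additive constants and a factor 1/2)
    nll[i] <- -as.numeric(determinant(P, logarithm = TRUE)$modulus) +
      drop(t(yi) %*% P %*% yi)
  }
  mean(nll)
}

lambdas   <- seq(lambdaMin, lambdaMax, length.out = step)  # search grid (assumed form)
LLs       <- sapply(lambdas, looNegLogLik, Y = Y)
optLambda <- lambdas[which.min(LLs)]                       # lowest score is deemed optimal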

See Also

ridgeS

Examples

## Obtain some (high-dimensional) data
p <- 25
n <- 10
set.seed(333)
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
colnames(X) <- letters[1:p]

## Obtain regularized precision under optimal penalty
OPT <- optPenaltyCV(X, 15, 30, 20, output = "light")
OPT$optLambda	# Optimal penalty
OPT$optPrec	# Regularized precision under optimal penalty
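
With the default output = "all", the returned lambdas and LLs elements allow the cross-validation profile to be inspected by hand, for instance as sketched below (the graph argument already provides a built-in visualization of the same grid search):

## Inspect the cross-validation profile over the penalty grid
OPT2 <- optPenaltyCV(X, 15, 30, 20, graph = FALSE)
plot(OPT2$lambdas, OPT2$LLs, type = "l",
     xlab = "lambda", ylab = "mean LOOCV negative log-likelihood")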