
porridge (version 0.3.3)

ridgeGLMmultiT: Multi-targeted ridge estimation of generalized linear models.

Description

Function that evaluates the multi-targeted ridge estimator of the regression parameter of generalized linear models.

Usage

ridgeGLMmultiT(Y, X, U=matrix(ncol=0, nrow=length(Y)), 
               lambdas, targetMat, model="linear", 
               minSuccDiff=10^(-10), maxIter=100)

Value

The ridge estimate of the regression parameter.

Arguments

Y

A numeric vector, the response.

X

The design matrix of the penalized covariates. The number of rows should match the number of elements of Y.

U

The design matrix of the unpenalized covariates. The number of rows should match the number of elements of Y.

lambdas

A numeric vector of strictly positive penalty parameters, one per target (i.e., one per column of targetMat).

targetMat

A matrix with targets for the regression parameter as columns.

model

A character, either "linear" or "logistic" (the models currently implemented), indicating which generalized linear model is to be fitted.

minSuccDiff

A numeric, the convergence criterion: iteration stops once the difference between the loglikelihoods of two successive iterations falls below this value. Used only if model="logistic".

maxIter

A numeric specifying the maximum number of iterations. Used only if model="logistic".
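For intuition, minSuccDiff and maxIter can be read as the stopping rules of an iteratively reweighted least-squares (IRLS) loop for the logistic model. A minimal sketch, assuming a single zero target and a hypothetical function name (this is not the porridge source code):

```r
# Hedged sketch: IRLS for ridge logistic regression with one zero target,
# stopping when successive loglikelihoods differ by less than minSuccDiff
# or after maxIter iterations. Function name is illustrative.
ridgeLogisticIRLS <- function(Y, X, lambda, minSuccDiff=10^(-10), maxIter=100) {
  p      <- ncol(X)
  beta   <- rep(0, p)
  llPrev <- -Inf
  for (iter in seq_len(maxIter)) {
    eta <- drop(X %*% beta)
    mu  <- 1 / (1 + exp(-eta))
    # penalized loglikelihood at the current estimate
    ll  <- sum(Y * eta - log(1 + exp(eta))) - 0.5 * lambda * sum(beta^2)
    # convergence check on successive loglikelihoods
    if (ll - llPrev < minSuccDiff) break
    llPrev <- ll
    W    <- mu * (1 - mu)
    # penalized Newton/IRLS update:
    # (X' W X + lambda I) beta = X' (W eta + Y - mu)
    beta <- drop(solve(crossprod(X, W * X) + lambda * diag(p),
                       crossprod(X, W * eta + Y - mu)))
  }
  beta
}
```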

Author

W.N. van Wieringen.

Details

This function finds the maximizer of the following penalized loglikelihood: \( \mathcal{L}( \mathbf{Y}, \mathbf{X}; \boldsymbol{\beta}) - \frac{1}{2} \sum_{k=1}^K \lambda_k \| \boldsymbol{\beta} - \boldsymbol{\beta}_{k,0} \|_2^2\), with loglikelihood \(\mathcal{L}( \mathbf{Y}, \mathbf{X}; \boldsymbol{\beta})\), response \(\mathbf{Y}\), design matrix \(\mathbf{X}\), regression parameter \(\boldsymbol{\beta}\), penalty parameters \(\lambda_1, \ldots, \lambda_K\), and the \(k\)-th shrinkage target \(\boldsymbol{\beta}_{k,0}\). For more details, see van Wieringen and Binder (2020).
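For model="linear" the maximizer has a closed form, and the multi-target penalty collapses to a single-target ridge penalty with parameter \(\sum_k \lambda_k\) and the \(\lambda\)-weighted average of the targets. A minimal sketch with made-up data and two hypothetical targets (not taken from the package):

```r
# Sketch: closed-form multi-target ridge estimate for the linear model,
#   betaHat = (X'X + sum_k lambda_k I)^{-1} (X'Y + sum_k lambda_k beta_{k,0}).
# Data, targets, and penalties below are illustrative assumptions.
set.seed(1)
n <- 20; p <- 5
X <- matrix(rnorm(n * p), nrow=n)
Y <- rnorm(n)
lambdas   <- c(1, 3)
targetMat <- cbind(rep(0, p), rep(0.5, p))

betaHat <- solve(crossprod(X) + sum(lambdas) * diag(p),
                 crossprod(X, Y) + targetMat %*% lambdas)

# equivalently: single-target ridge with penalty sum(lambdas) and the
# lambda-weighted average of the targets
target0  <- targetMat %*% lambdas / sum(lambdas)
betaHat2 <- solve(crossprod(X) + sum(lambdas) * diag(p),
                  crossprod(X, Y) + sum(lambdas) * target0)
all.equal(c(betaHat), c(betaHat2))   # TRUE
```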

References

van Wieringen, W.N., Binder, H. (2020), "Online learning of regression models from a sequence of datasets by penalized estimation", submitted.

Examples

# set the sample size
n <- 50

# set the true parameter
betas <- (c(0:100) - 50) / 20

# generate covariate data
X <- matrix(rnorm(length(betas)*n), nrow=n)

# sample the response
linpred <- drop(tcrossprod(betas, X))
probs   <- exp(linpred) / (1 + exp(linpred))
Y       <- rbinom(n, size=1, prob=probs)

# set the penalty parameters, one per target
lambdas <- c(1,3)

# estimate the logistic regression parameter
bHat <- ridgeGLMmultiT(Y, X, lambdas=lambdas, model="logistic",
                       targetMat=cbind(betas/2, rnorm(length(betas))))
