The kest function computes the different biasing parameters proposed in the literature by various researchers.
kest(object, …)
# S3 method for lmridge
kest(object, …)
# S3 method for klmridge
print(x, …)
An object of class "lmridge" for kest.
An object of class "klmridge" for the print method.
Not presently used in this implementation.
The function returns a list of biasing parameters computed by the following methods, proposed by various researchers.
By Thisted (1976), \(\frac{(p-2)\hat{\sigma}^2}{\sum\hat{\beta}_j^2}\)
As in lm.ridge of MASS, \(\frac{(p-2)\hat{\sigma}^2 n}{\sum\hat{y}^2}\)
By Lawless and Wang (1976), \(\frac{p\hat{\sigma}^2}{\sum\lambda_j\hat{\alpha}_j^2}\)
Value of Cross Validation (CV) for each biasing parameter \(K\), \(CV_K=\frac{1}{n}\sum_{i=1}^n (y_i-x_i'\hat{\beta}_K)^2\).
Value of the biasing parameter at which CV is minimum.
By Hoerl and Kennard (1970), \(\frac{p\hat{\sigma}^2}{\hat{\beta}'\hat{\beta}}\)
By Kibria (2003), \(\frac{1}{p}\sum\frac{\hat{\sigma}^2}{\hat{\alpha}_j^2}\)
Value of Generalized Cross Validation (GCV) for each biasing parameter \(K\), \(\frac{\sum_{i=1}^n(y_i-x_i'\hat{\beta}_K)^2}{[n-(1+Trace(H_{R,K}))]^2}\).
Value of the biasing parameter at which GCV is minimum.
By Dwivedi and Srivastava (1978), \(\frac{\hat{\sigma}^2}{\hat{\beta}'\hat{\beta}}\)
By Kibria (2003), \(\frac{\hat{\sigma}^2}{(\prod\hat{\alpha}_j^2)^{1/p}}\)
By Kibria (2003), \(median(\frac{\hat{\sigma}^2}{\hat{\alpha}_j^2})\)
By Muniz and Kibria (2009), \(max[\frac{1}{\sqrt{\frac{\hat{\sigma}^2}{\hat{\alpha}^2_j}}}]\)
By Muniz and Kibria (2009), \(max[\sqrt{\frac{\hat{\sigma}^2}{\hat{\alpha}_j^2}}]\)
By Muniz and Kibria (2009), \([\prod\frac{1}{\sqrt{\frac{\hat{\sigma}^2}{\hat{\alpha}_j^2}}}]^{\frac{1}{p}}\)
By Muniz and Kibria (2009), \([\prod \sqrt{\frac{\hat{\sigma}^2}{\hat{\alpha}_j^2}}]^{\frac{1}{p}}\)
By Muniz and Kibria (2009), \(Median[\frac{1}{\sqrt{\frac{\hat{\sigma}^2}{\hat{\alpha}^2_j}}}]\)
By Muniz et al. (2012), \(max(\frac{1}{\sqrt{\frac{\lambda_{max} \hat{\sigma}^2} {(n-p)\hat{\sigma}^2+\lambda_{max}\hat{\alpha}^2_j}}})\)
By Muniz et al. (2012), \(max[\sqrt{\frac{\lambda_{max}\hat{\sigma}^2}{(n-p)\hat{\sigma}^2+\lambda_{max}\hat{\alpha}^2_j}}]\)
By Muniz et al. (2012), \([\prod(\frac{1}{\sqrt{\frac{\lambda_{max}\hat{\sigma}^2}{(n-p)\hat{\sigma}^2+\lambda_{max}\hat{\alpha}^2_j}}})]^{\frac{1}{p}}\)
By Muniz et al. (2012), \([\prod(\sqrt{\frac{\lambda_{max}\hat{\sigma}^2}{(n-p)\hat{\sigma}^2+\lambda_{max}\hat{\alpha}^2_j}})]^{\frac{1}{p}}\)
By Muniz et al. (2012), \(Median[\frac{1}{\sqrt{\frac{\lambda_{max}\hat{\sigma}^2}{(n-p)\hat{\sigma}^2+\lambda_{max}\hat{\alpha}^2_j}}}]\)
By Dorugade and Kashid (2010), \(max(0, \frac{p\hat{\sigma}^2}{\hat{\alpha}'\hat{\alpha}}-\frac{1}{n(VIF_j)_{max}})\)
By Dorugade (2014), \(HM[\frac{2p}{\lambda_{max}}\sum(\frac{\hat{\sigma}^2}{\hat{\alpha}^2_j})]\)
The OLS estimator in canonical form, i.e., \(\hat{\alpha}=(P'X'XP)^{-1}X^{*\prime}y\), where \(X^*=XP\) and \(P\) is the matrix of eigenvectors of \(X'X\).
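As a minimal sketch of how the canonical form and a couple of these estimators are assembled (base R only; the simulated data and all object names are illustrative, not part of the package API):

```r
## Sketch: canonical form and two biasing parameters, assuming a
## centred and scaled design matrix X (toy data, not package code).
set.seed(1)
n <- 30; p <- 3
X <- scale(matrix(rnorm(n * p), n, p))
y <- X %*% c(1, 2, 3) + rnorm(n)

fit    <- lm.fit(X, y)
sigma2 <- sum(fit$residuals^2) / (n - p)   # sigma^2-hat
beta   <- fit$coefficients                 # OLS beta-hat

## canonical form: P holds the eigenvectors of X'X, alpha = P' beta
P     <- eigen(crossprod(X))$vectors
alpha <- drop(crossprod(P, beta))

## Hoerl-Kennard (1970): p * sigma^2 / (beta' beta)
k_HKB <- p * sigma2 / sum(beta^2)
## Kibria (2003) geometric-mean version, using the alpha_j
k_GM  <- sigma2 / prod(alpha^2)^(1 / p)
c(HKB = k_HKB, GM = k_GM)
```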
The kest function computes different biasing parameters for ordinary linear ridge regression. All these methods are available in the literature and were proposed by various authors; see the reference section.
Dorugade, A. and Kashid, D. (2010). Alternative Method for Choosing Ridge Parameter for Regression. Applied Mathematical Sciences, 4(9), 447-456.
Dorugade, A. (2014). New Ridge Parameters for Ridge Regression. Journal of the Association of Arab Universities for Basic and Applied Sciences, 15, 94-99.
Hoerl, A. E., Kennard, R. W., and Baldwin, K. F. (1975). Ridge Regression: Some Simulations. Communications in Statistics, 4, 105-123.
Hoerl, A. E. and Kennard, R. W. (1970). Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 12, 55-67.
Imdad, M. U. Addressing Linear Regression Models with Correlated Regressors: Some Package Development in R (Doctoral Thesis, Department of Statistics, Bahauddin Zakariya University, Multan, Pakistan), 2017.
Kibria, B. (2003). Performance of Some New Ridge Regression Estimators. Communications in Statistics-Simulation and Computation, 32(2), 419-435.
Lawless, J., and Wang, P. (1976). A Simulation Study of Ridge and Other Regression Estimators. Communications in Statistics-Theory and Methods, 5(4), 307-323.
Muniz, G., and Kibria, B. (2009). On Some Ridge Regression Estimators: An Empirical Comparisons. Communications in Statistics-Simulation and Computation, 38(3), 621-630.
Muniz, G., Kibria, B., Mansson, K., and Shukur, G. (2012). On Developing Ridge Regression Parameters: A Graphical Investigation. SORT-Statistics and Operations Research Transactions, 36(2), 115-138.
Thisted, R. A. (1976). Ridge Regression, Minimax Estimation and Empirical Bayes Methods. Technical Report 28, Division of Biostatistics, Stanford University, California.
Venables, W. N. and Ripley, B. D. (2002). Modern Applied Statistics with S. Springer New York, 4th edition, ISBN 0-387-95457-0.
library(lmridge)

mod <- lmridge(y ~ ., as.data.frame(Hald), K = seq(0, 0.2, 0.001))
kest(mod)

## GCV values
kest(mod)$GCV

## minimum GCV value
kest(mod)$kGCV

## Hoerl and Kennard (1970)
kest(mod)$HKB