Usage:

hda(x, ...)

## Default S3 method:
hda(x, grouping, newdim = 1:(ncol(x) - 1), crule = FALSE,
  reg.lamb = NULL, reg.gamm = NULL, initial.loadings = NULL,
  sig.levs = c(0.05, 0.05), noutit = 7, ninit = 10, verbose = TRUE, ...)

## S3 method for class 'formula':
hda(formula, data = NULL, ...)
Arguments:

formula: a formula of the form grouping ~ x1 + x2 + ... That is, the response is the grouping factor and the right hand side specifies the (non-factor) discriminators.

crule: logical specifying whether a naiveBayes classification rule should be computed. Requires package e1071.
reg.lamb: parameter for regularization towards equal covariance matrix estimates of the classes (see Details and Friedman, 1989). Default is no regularization.

reg.gamm: parameter for shrinkage towards diagonal covariance matrices of equal variance in all variables, where 0 means diagonality. Default is no shrinkage.
initial.loadings: initial guess of the loadings matrix. Must be quadratic of dimension ncol(x). Default is the identity matrix. By specification of initial.loadings = "random" a random orthonormal matrix will be generated using qr.Q(qr()) of a random matrix with uniformly distributed elements (a small sketch follows the argument list below).
...: for hda.formula: further arguments passed to function hda.default, such as newdim. For hda.default: currently not used (a sketch of the formula interface follows below).
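The random initialization mentioned for initial.loadings can be reproduced by hand. A minimal sketch of the qr.Q(qr()) construction described above; the object names are only for illustration:

p <- 5
M <- matrix(runif(p^2), p)          # random matrix with uniformly distributed elements
Q <- qr.Q(qr(M))                    # orthonormal matrix from its QR decomposition
max(abs(crossprod(Q) - diag(p)))    # numerically zero, i.e. t(Q) %*% Q = I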
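A minimal sketch of the formula interface with newdim passed through the dots; the data frame and its column names are made up for illustration:

df <- data.frame(g = factor(rep(1:2, each = 25)),
                 x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
fit <- hda(g ~ x1 + x2 + x3, data = df, newdim = 2)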
Value:

hda.scores: the input data transformed into the reduced discriminant space of newdim dimensions.

grouping: class labels of the training data. Identical to input grouping.

naivebayes: object of class naiveBayes, trained on the input data in the reduced space for classification of new (transformed) data. Its computation must be requested via the input parameter crule.

comp.acc: componentwise accuracies, together with the resulting accuracy if single variable loadings are set to 0. The first element describes the overall accuracy lift; the second element is an array of dimension (number of classes, number of components in the reduced space, number of variables) specifying the lifts for the recognition of each class separately (see the sketch after this list).
The returned object also contains the result of a test of equal class means in the remaining dimensions (those beyond newdim), performed via manova based on Wilks' lambda (cf. the sig.levs argument).
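A hedged sketch of how the accuracy components might be inspected, assuming the element structure described above and that crule = TRUE is required for their computation:

fit <- hda(iris[, 1:4], iris$Species, crule = TRUE, newdim = 2)
fit$comp.acc[[1]]        # overall accuracy lift
dim(fit$comp.acc[[2]])   # (number of classes, components, variables)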
Details:

The classes are assumed to differ only within a subspace of newdim dimensions and to have equal distributions in the remaining dimensions
(see Kumar and Andreou, 1998). The scores are uncorrelated for all classes. The algorithm is implemented as proposed by
Burget (2006). Regularization is computed as proposed by Friedman (1989) and Szepannek et al. (2009).

References:

Fahrmeir, L. and Hamerle, A. (1984): Multivariate statistische Verfahren. de Gruyter, Berlin.
Friedman, J. (1989): Regularized discriminant analysis. JASA 84, 165-175.
Kumar, N. and Andreou, A. (1998): Heteroscedastic discriminant analysis and reduced rank HMMs for improved speech recognition. Speech Communication 25, 283-297.
Szepannek, G., Harczos, T., Klefenz, F. and Weihs, C. (2009): Extending features for automatic speech recognition by means of auditory modelling. In: Proceedings of the European Signal Processing Conference (EUSIPCO) 2009, Glasgow, 1235-1239.
See also: predict.hda, showloadings, plot.hda

Examples:
library(mvtnorm)
library(MASS)
# simulate data for two classes
n <- 50
meana <- meanb <- c(0,0,0,0,0)
cova <- diag(5)
cova[1,1] <- 0.2
# introduce decaying correlations among variables 3 to 5
for(i in 3:4){
  for(j in (i+1):5){
    cova[i,j] <- cova[j,i] <- 0.75^(j-i)
  }
}
# class b differs from class a only in the variances of the first two variables
covb <- cova
diag(covb)[1:2] <- c(1, 0.2)
xa <- rmvnorm(n, meana, cova)
xb <- rmvnorm(n, meanb, covb)
x <- rbind(xa, xb)
classes <- as.factor(c(rep(1,n), rep(2,n)))
# rotate simulated data
symmat <- matrix(runif(5^2),5)
symmat <- symmat + t(symmat)
even <- eigen(symmat)$vectors
rotatedspace <- x %*% even
plot(as.data.frame(rotatedspace), col = classes)
# apply linear discriminant analysis and plot data on (single) discriminant axis
lda.res <- lda(rotatedspace, classes)
plot(rotatedspace %*% lda.res$scaling, col = classes,
ylab = "discriminant axis", xlab = "Observation index")
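# Both classes share the same mean, so a mean-separating method like LDA
# finds little structure here; the classes differ only in their covariances,
# as comparing the within-class variances on the discriminant axis shows:
tapply(drop(rotatedspace %*% lda.res$scaling), classes, var)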
# apply heteroscedastic discriminant analysis and plot data in discriminant space
hda.res <- hda(rotatedspace, classes)
plot(hda.res$hda.scores, col = classes)
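# The Details section states that the scores are uncorrelated for all
# classes; a quick check of the within-class correlations (round-off aside):
by(as.data.frame(hda.res$hda.scores), classes, function(s) round(cor(s), 2))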
# compare with principal component analysis
pca.res <- prcomp(as.data.frame(rotatedspace), retx = TRUE)
plot(as.data.frame(pca.res$x), col=classes)
# Automatically build classification rule
# this requires package e1071
hda.res2 <- hda(rotatedspace, classes, crule = TRUE)
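# With a rule attached, predict.hda (see "See also") can classify new data.
# A hedged sketch, assuming predict() accepts observations in the original
# (untransformed) space via the newdata argument:
pred <- predict(hda.res2, newdata = rotatedspace)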