HDclassif (version 2.1.0)

hdda: High Dimensional Discriminant Analysis

Description

HDDA is a model-based discriminant analysis method which assumes that each class of the dataset lives in its own Gaussian subspace of much lower dimension than the original space. The hdda function estimates the parameters of each class subspace in order to predict the class of new observations of the same kind.

Usage

hdda(data, cls, model = "AkjBkQkDk", graph = FALSE, d_select = "Cattell",
  threshold = 0.2, com_dim = NULL, show = TRUE, scaling = FALSE,
  cv.dim = 1:10, cv.threshold = c(0.001, 0.005, 0.05, 1:9 * 0.1),
  cv.vfold = 10, LOO = FALSE, noise.ctrl = 1e-08, d)

Arguments

data

A matrix or a data frame of observations, with observations in rows and variables in columns. Note that NAs are not allowed.

cls

A vector giving the class of each observation; it can be numeric or character.

model

A character string vector, or an integer vector, indicating the models to be used. The available models are: "AkjBkQkDk" (default), "AkBkQkDk", "ABkQkDk", "AkjBQkDk", "AkBQkDk", "ABQkDk", "AkjBkQkD", "AkBkQkD", "ABkQkD", "AkjBQkD", "AkBQkD", "ABQkD", "AjBQD", "ABQD". The names are not case sensitive and integers can be used instead; see Details for more information. If several models are given, only the results of the one maximizing the BIC criterion are kept. To run all models, use model="ALL".

graph

This parameter is for comparison purposes only and is relevant only when several estimations are run at once (either when using several models, or when using cross-validation to select the best dimension/threshold). If graph = TRUE, a plot comparing the results of all estimations is displayed. Default is FALSE.

d_select

Either “Cattell” (default), “BIC” or “CV”. This parameter selects the method used to choose the intrinsic dimensions; see Details for more information.

threshold

A float strictly between 0 and 1. It is the threshold used in Cattell's scree-test.
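
To illustrate the role of this threshold, here is a minimal sketch of a Cattell-style scree-test on a vector of ordered eigenvalues (an illustration only, not the package's internal code; the function cattell_dim and the eigenvalues are invented for the example):

cattell_dim <- function(ev, threshold = 0.2) {
  gaps <- abs(diff(ev))                      # gaps between consecutive eigenvalues
  max(which(gaps >= threshold * max(gaps)))  # last "large" gap gives the dimension
}
cattell_dim(c(5, 2.5, 1.2, 0.3, 0.25, 0.22), threshold = 0.2)  # selects dimension 3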

com_dim

Used only for common dimension models. It sets the common dimension to use; if given, it must be an integer. Default is NULL.

show

Use show = FALSE to turn off the information that may be printed. Default is TRUE.

scaling

Logical: whether to scale the dataset (mean 0 and standard deviation 1 for each variable) or not. By default the data is not scaled.

cv.dim

A vector of integers, used only when d_select=“CV”. It gives the dimensions for which the cross-validation is to be done. Note that dimensions greater than the maximum possible dimension are discarded.

cv.threshold

A vector of floats strictly between 0 and 1, used only when d_select=“CV”. It gives the thresholds for which the cross-validation is to be done.

cv.vfold

An integer, used only when d_select=“CV”. It gives the number of subsamples into which the dataset is split. If “cv.vfold” is greater than the number of observations, it is set equal to the number of observations.

LOO

If TRUE, it returns the results (classes and posterior probabilities) for leave-one-out cross-validation.

noise.ctrl

This parameter prevents the 'noise' parameter b from taking a value that is too low. It guarantees that the dimension selection process does not select too many dimensions (which would lead to a too low value of b). When selecting the intrinsic dimensions using Cattell's scree-test or BIC, the function ignores the eigenvalues smaller than noise.ctrl, so the selected intrinsic dimensions cannot be equal to or greater than the rank of these eigenvalues.
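
As a rough illustration of the effect (an assumption about the mechanism, not the package's exact code), eigenvalues below noise.ctrl cap the largest selectable dimension:

ev <- c(4.2, 1.8, 0.7, 5e-9, 1e-10)  # eigenvalues in decreasing order
sum(ev >= 1e-8)                      # with noise.ctrl = 1e-8, at most 3 dimensions are selectable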

d

DEPRECATED. This parameter is kept for backward compatibility; please use the parameter d_select instead.

Value

hdda returns an 'hdc' object; it's a list containing:

model

The name of the model.

k

The number of classes.

d

The dimensions of each class.

a

The parameters of each class subspace.

b

The noise of each class subspace.

mu

The mean of each variable for each class.

prop

The proportion of each class.

ev

The eigenvalues of the variance-covariance matrix.

Q

The orthogonal matrix of orientation of each class.

kname

The name of each class.

BIC

The BIC value of the model used.

scaling

The centers and standard deviations of the original dataset.
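
For instance, once a model has been fitted, these components can be inspected directly; this sketch assumes a data matrix X and a class vector clx as in the Examples below:

prms <- hdda(X, clx)  # fit with the default model
prms$model            # name of the retained model
prms$d                # selected intrinsic dimension of each class
prms$BIC              # BIC value of the retained model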

Details

Some information on the meaning of the model names:

Akj are the parameters of the class subspaces:

  • if Akj: each class has its own parameters, with one parameter per dimension

  • if Ak: the classes have different parameters, but only one per class

  • if Aj: all the classes have the same parameter for each dimension (a particular case with a common orientation matrix)

  • if A: all classes share a single parameter

Bk are the noises of the class subspaces:

  • if Bk: each class has its own noise

  • if B: all classes have the same noise

Qk is the orientation matrix of each class:

  • if Qk: each class has its own orientation matrix

  • if Q: all classes have the same orientation matrix

Dk is the intrinsic dimension of each class:

  • if Dk: the dimensions are free and specific to each class

  • if D: the dimension is common to all classes

The model “all” will compute all the models, give their BIC and keep the model with the highest BIC value. Instead of writing the model names, they can also be specified by an integer: 1 represents the most general model (“AkjBkQkDk”) while 14 is the most constrained (“ABQD”); the other number/name correspondences are given in the table below. Note also that several models can be run at once by using a vector of models (e.g. model = c("AKBKQKD","AKJBQKDK","AJBQD") is equivalent to model = c(8,4,13); to run the first six models, use model=1:6). If all the models are to be run, model="all" is faster than model=1:14.

AkjBkQkDk   1      AkjBkQkD   7
AkBkQkDk    2      AkBkQkD    8
ABkQkDk     3      ABkQkD     9
AkjBQkDk    4      AkjBQkD   10
AkBQkDk     5      AkBQkD    11
ABQkDk      6      ABQkD     12
                   AjBQD     13
                   ABQD      14
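
For example, assuming a data matrix X and a class vector clx as in the Examples, the following two calls request the same three models:

prms <- hdda(X, clx, model = c("AkBkQkD", "AkjBQkDk", "AjBQD"))  # by name
prms <- hdda(X, clx, model = c(8, 4, 13))                        # by number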

The parameter d_select is used to choose the method for selecting the intrinsic dimensions of the classes. Its possible values are:

  • “Cattell”: Cattell's scree-test is used to estimate the intrinsic dimension of each class. If the model has a common dimension (models 7 to 14), the scree-test is done on the covariance matrix of the whole dataset.

  • “BIC”: The intrinsic dimensions are selected with the BIC criterion. See Bouveyron et al. (2010) for a discussion of this topic. For common dimension models, the procedure is done on the covariance matrix of the whole dataset.

  • “CV”: A V-fold cross-validation (CV) can be done in order to select the best threshold (for all models) or the best common dimension (models 7 to 14); a sketch is given after this list. The cross-validation is done for each dimension (respectively threshold) in the argument “cv.dim” (resp. “cv.threshold”), and the dimension (resp. threshold) that gives the best correct classification rate is kept. The dataset is split into “cv.vfold” (default is 10) random subsamples; each of them is used in turn as validation data while the remaining data is used as training data. Note that if “cv.vfold” equals the number of observations, this CV is equivalent to leave-one-out cross-validation.
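
A minimal sketch of such a cross-validated selection, assuming X and clx as in the Examples:

# Select the best common dimension among 1:10 by 10-fold cross-validation
prms <- hdda(X, clx, model = "AkjBkQkD", d_select = "CV",
             cv.dim = 1:10, cv.vfold = 10)
# For free dimension models, cv.threshold plays the same role as cv.dim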

References

Bouveyron, C. Girard, S. and Schmid, C. (2007) “High Dimensional Discriminant Analysis”, Communications in Statistics: Theory and Methods, vol. 36 (14), pp. 2607--2623

Bouveyron, C. Celeux, G. and Girard, S. (2010) “Intrinsic dimension estimation by maximum likelihood in probabilistic PCA”, Technical Report 440372, Universite Paris 1 Pantheon-Sorbonne

Berge, L. Bouveyron, C. and Girard, S. (2012) “HDclassif: An R Package for Model-Based Clustering and Discriminant Analysis of High-Dimensional Data”, Journal of Statistical Software, 46(6), 1--29, url: http://www.jstatsoft.org/v46/i06/

See Also

hddc, predict.hdc, plot.hdc

Examples

# NOT RUN {
# Example 1:
data <- simuldata(1000, 1000, 50, K = 5)
X <- data$X
clx <- data$clx
Y <- data$Y
cly <- data$cly
# we get the HDDA parameters:
prms1 <- hdda(X, clx)         

cl1 <- predict(prms1, Y, cly)
# the class vector of Y estimated with HDDA:
cl1$class                     

# another model is used:
prms1 <- hdda(X, clx, model = 12)
# model=12 is equivalent to model="ABQkD"
cl1 <- predict(prms1, Y, cly) 

# Example 2:
data(wine)
a <- wine[,-1]
z <- wine[,1]
prms2 <- hdda(a, z, model="all", scaling=TRUE, d_select="bic", graph=TRUE)
cl2 <- predict(prms2, a, z)

# getting the best dimension
# using a common dimension model
# we do LOO-CV using cv.vfold=nrow(a)
prms3 <- hdda(a, z, model="akjbkqkd", d_select="CV", cv.vfold=nrow(a), scaling=TRUE, graph=TRUE)

cl3 <- predict(prms3, a, z)

# Example 3:
# Validation with LOO
prms4 <- hdda(a, z, LOO=TRUE, scaling=TRUE)
sum(prms4$class==z) / length(z)

# }
