mclust (version 5.2.2)

MclustDA: MclustDA discriminant analysis

Description

Discriminant analysis based on Gaussian finite mixture modeling.

Usage

MclustDA(data, class, G = NULL, modelNames = NULL,
         modelType = c("MclustDA", "EDDA"),
         prior = NULL, control = emControl(),
         initialization = NULL,
         warn = mclust.options("warn"), ...)

Arguments

data
A data frame or matrix giving the training data.
class
A vector giving the class labels for the observations in the training data.
G
An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated within each class. The default is G = 1:5. A different set of mixture components for each class can be specified by providing this argument with a list of integers for each class. See the examples below.
modelNames
A vector of character strings indicating the models to be fitted by EM within each class (see the description in mclustModelNames). A different set of mixture models for each class can be specified by providing this argument with a list of character strings. See the examples below.
modelType
A character string specifying whether the models given in modelNames should fit a different number of mixture components and covariance structures for each class ("MclustDA", the default) or should be constrained to have a single component for each class with the same covariance structure among classes ("EDDA"). See Details section and the examples below.
prior
The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function priorControl (see the sketch following these arguments).
control
A list of control parameters for EM. The defaults are set by the call emControl().
initialization
A list containing zero or more components used to initialize the EM algorithm.

warn
A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by mclust.options.
...
Further arguments passed to or from other methods.
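
A minimal sketch of the list-valued forms of G and modelNames and of the prior argument. The particular numbers of components and model names below are illustrative assumptions only, and the list elements are assumed to follow the order of the class levels:

# different candidate components and models for each of the three iris species
fit <- MclustDA(iris[,-5], iris$Species,
                G = list(1:2, 1:3, 2),
                modelNames = list("EEE", c("EEE", "VVV"), "VII"))
# a conjugate prior on the means and variances via priorControl()
fitPrior <- MclustDA(iris[,-5], iris$Species, prior = priorControl())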

Value

An object of class 'MclustDA' providing the optimal (according to BIC) mixture model estimated for each class.
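
A minimal sketch of inspecting a fitted object; no specific component names are assumed here, since they can be listed directly with str():

fit <- MclustDA(iris[,-5], iris$Species)
str(fit, max.level = 1)   # list the top-level components of the returned object
summary(fit)              # per-class models selected by BIC and training classification summary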

References

Bensmail, H. and Celeux, G. (1996). Regularized Gaussian discriminant analysis through eigenvalue decomposition. Journal of the American Statistical Association, 91, 1743-1748.

Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97, 611-631.

Fraley, C., Raftery, A. E., Murphy, T. B. and Scrucca, L. (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.

Details

The "EDDA" method for discriminant analysis is described in Bensmail and Celeux (1996), while "MclustDA" in Fraley and Raftery (2002).

See Also

summary.MclustDA, plot.MclustDA, predict.MclustDA, classError

Examples

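# split iris into training (odd rows) and test (even rows) sets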
odd <- seq(from = 1, to = nrow(iris), by = 2)
even <- odd + 1
X.train <- iris[odd,-5]
Class.train <- iris[odd,5]
X.test <- iris[even,-5]
Class.test <- iris[even,5]

# common EEE covariance structure (which is essentially equivalent to linear discriminant analysis)
irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA", modelNames = "EEE")
summary(irisMclustDA, parameters = TRUE)
summary(irisMclustDA, newdata = X.test, newclass = Class.test)

# common covariance structure selected by BIC
irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA")
summary(irisMclustDA, parameters = TRUE)
summary(irisMclustDA, newdata = X.test, newclass = Class.test)

# general covariance structure selected by BIC
irisMclustDA <- MclustDA(X.train, Class.train)
summary(irisMclustDA, parameters = TRUE)
summary(irisMclustDA, newdata = X.test, newclass = Class.test)

plot(irisMclustDA)
plot(irisMclustDA, dimens = 3:4)
plot(irisMclustDA, dimens = 4)

plot(irisMclustDA, what = "classification")
plot(irisMclustDA, what = "classification", newdata = X.test)
plot(irisMclustDA, what = "classification", dimens = 3:4)
plot(irisMclustDA, what = "classification", newdata = X.test, dimens = 3:4)
plot(irisMclustDA, what = "classification", dimens = 4)
plot(irisMclustDA, what = "classification", dimens = 4, newdata = X.test)

plot(irisMclustDA, what = "train&test", newdata = X.test)
plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 3:4)
plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 4)

plot(irisMclustDA, what = "error")
plot(irisMclustDA, what = "error", dimens = 3:4)
plot(irisMclustDA, what = "error", dimens = 4)
plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test)
plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 3:4)
plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 4)

## Not run: 
# simulated 1D data
n <- 250
set.seed(1)
triModal <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5))
triClass <- c(rep(1,n), rep(2,n), rep(3,n))
odd <- seq(from = 1, to = length(triModal), by = 2)
even <- odd + 1
triMclustDA <- MclustDA(triModal[odd], triClass[odd])
summary(triMclustDA, parameters = TRUE)
summary(triMclustDA, newdata = triModal[even], newclass = triClass[even])
plot(triMclustDA, what = "scatterplot")
plot(triMclustDA, what = "classification")
plot(triMclustDA, what = "classification", newdata = triModal[even])
plot(triMclustDA, what = "train&test", newdata = triModal[even])
plot(triMclustDA, what = "error")
plot(triMclustDA, what = "error", newdata = triModal[even], newclass = triClass[even])

# simulated 2D cross data
data(cross)
odd <- seq(from = 1, to = nrow(cross), by = 2)
even <- odd + 1
crossMclustDA <- MclustDA(cross[odd,-1], cross[odd,1])
summary(crossMclustDA, parameters = TRUE)
summary(crossMclustDA, newdata = cross[even,-1], newclass = cross[even,1])
plot(crossMclustDA, what = "scatterplot")
plot(crossMclustDA, what = "classification")
plot(crossMclustDA, what = "classification", newdata = cross[even,-1])
plot(crossMclustDA, what = "train&test", newdata = cross[even,-1])
plot(crossMclustDA, what = "error")
plot(crossMclustDA, what = "error", newdata = cross[even,-1], newclass = cross[even,1])
## End(Not run)
