Usage:

## S3 method for class 'default':
clusplot(x, clus, diss = FALSE, cor = TRUE, stand = FALSE,
         lines = 2, shade = FALSE, color = FALSE,
         labels = 0, plotchar = TRUE,
         col.p = "dark green", col.txt = col.p,
         span = TRUE, xlim = NULL, ylim = NULL,
         main = paste("CLUSPLOT(", deparse(substitute(x)), ")"),
         verbose = getOption("verbose"),
         ...)
Arguments:

x: matrix or data frame, or dissimilarity matrix, depending on the value of the diss argument. In case of a matrix (alike), each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric; missing values (NAs) are allowed. In case of a dissimilarity matrix, x is typically the output of daisy or dist, and missing values are not allowed.

clus: a vector of length n representing a clustering of x. For each observation the vector lists the number or name of the cluster to which it has been assigned. clus is often the clustering component of the output of a partitioning function such as pam or clara.

diss: logical indicating if x will be considered as a dissimilarity matrix or a matrix of observations by variables (see the x argument above).

cor: logical flag, only used in case of a data matrix (diss = FALSE). If TRUE, then the variables are scaled to unit variance.

lines: integer out of 0, 1, 2, used to obtain an idea of the distances between ellipses. The distance between two ellipses E1 and E2 is measured along the line connecting the centers m1 and m2 of the two ellipses. In case E1 and E2 overlap on the line through m1 and m2, no line is drawn; otherwise, lines = 0 draws no distance lines, lines = 1 draws the segment between m1 and m2, and lines = 2 draws a segment between the boundaries of E1 and E2 (along the line connecting m1 and m2). These options are illustrated in the sketch after this list.

labels: integer code (currently 0 to 5) determining which labels are placed in the plot; with labels = 0 (the default) no labels are drawn, while higher values label the points and/or the ellipses, or allow them to be identified interactively (see identify). The levels of the vector clus are taken as labels for the clusters; the labels of the points are the rownames of x when x is matrix-like.

xlim, ylim: numeric ranges for the plot, see plot.default.

...: further graphical parameters may also be supplied, see par.
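To make the lines, labels and span options concrete, here is a small self-contained sketch; the synthetic data set and the particular parameter values are illustrative choices, not taken from this documentation:

library(cluster)

## two well-separated synthetic groups
set.seed(1)
xy <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),
            matrix(rnorm(40, mean = 4), ncol = 2))
rownames(xy) <- paste0("p", seq_len(nrow(xy)))  # point labels come from the rownames
cl <- pam(xy, 2)$clustering

## label all points and ellipses (labels = 2), draw the segment between the
## ellipse boundaries (lines = 2), and keep the default spanning ellipses (span = TRUE)
clusplot(xy, cl, labels = 2, lines = 2, span = TRUE,
         color = TRUE, shade = TRUE)

With lines = 1 the segment would instead join the two ellipse centers, and with lines = 0 no distance line is drawn at all.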
Value:

An invisible list with components:

Distances: when lines is 1 or 2 we obtain a k by k matrix (k is the number of clusters). The element in [i,j] is the distance between ellipse i and ellipse j. If lines = 0, then the value of this component is NA.

Shading: a vector of length k (the number of clusters), containing the amount of shading per cluster. Let y be a vector whose element i is the ratio between the number of objects in cluster i and the area of ellipse i; when cluster i is a line segment, y[i] is set to NA. Let z be the sum of all the elements of y without the NAs. Then we put shading = y/z * 37 + 3.
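Purely as an illustration of the shading rule above (the per-cluster counts and ellipse areas below are made-up values, not the package's internal code):

## hypothetical per-cluster object counts and ellipse areas
n.obj <- c(20, 35, 15)      # number of objects in clusters 1..3
area  <- c(2.1, NA, 0.8)    # ellipse areas; NA where the "ellipse" degenerates to a line segment

y <- n.obj / area           # element i: objects in cluster i divided by area of ellipse i
z <- sum(y, na.rm = TRUE)   # sum of the non-NA elements of y
shading <- y / z * 37 + 3   # the formula given above; NA entries stay NA
shading

Since the list is returned invisibly, assign the result of a call (for example cp <- clusplot(...)) to inspect its Distances and Shading components.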
Details:

clusplot uses the functions princomp and cmdscale. These functions are data reduction techniques. They will represent the data in a bivariate plot. Ellipses are then drawn to indicate the clusters. The further layout of the plot is determined by the optional arguments.
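For intuition, here is a rough sketch of the two projection routes; it approximates the idea rather than reproducing clusplot's exact internal computation, and the object names are arbitrary:

library(cluster)

## route 1: data matrix -> first two principal components (cf. princomp)
data(iris)
iris.x <- iris[, 1:4]
pc     <- princomp(iris.x, cor = TRUE)  # cf. the 'cor' argument above
xy.pc  <- pc$scores[, 1:2]              # bivariate representation of the observations

## route 2: dissimilarity matrix -> classical MDS in two dimensions (cf. cmdscale)
data(votes.repub)
xy.mds <- cmdscale(daisy(votes.repub), k = 2)

## clusplot then draws an ellipse around the points of each cluster in such a plane
plot(xy.pc, col = pam(iris.x, 3)$clustering, pch = 20,
     xlab = "Component 1", ylab = "Component 2")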
References:

Kaufman, L. and Rousseeuw, P.J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York.
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997). Integrating Robust Clustering Techniques in S-PLUS, Computational Statistics and Data Analysis, 26, 17-37.
See Also:

princomp, cmdscale, pam, clara, daisy, par, identify, cov.mve, clusplot.partition.

Examples:

library(cluster)  # needed when running these examples outside the package's help pages

## plotting votes.diss (dissimilarity) in a bivariate plot and
## partitioning into 2 clusters
data(votes.repub)
votes.diss <- daisy(votes.repub)
votes.clus <- pam(votes.diss, 2, diss = TRUE)$clustering
clusplot(votes.diss, votes.clus, diss = TRUE, shade = TRUE)
clusplot(votes.diss, votes.clus, diss = TRUE, span = FALSE)  # simple ellipses
if(interactive()) # uses identify() *interactively* :
  clusplot(votes.diss, votes.clus, diss = TRUE, shade = TRUE,
           labels = 1)
## plotting iris (data frame) in a 2-dimensional plot and partitioning
## into 3 clusters.
data(iris)
iris.x <- iris[, 1:4]
clusplot(iris.x, pam(iris.x, 3)$clustering, diss = FALSE,
         plotchar = TRUE, color = TRUE)