Usage

normalmixEM(x, lambda = NULL, mu = NULL, sigma = NULL, k = 2,
            mean.constr = NULL, sd.constr = NULL,
            epsilon = 1e-08, maxit = 1000, maxrestarts = 20,
            verb = FALSE, fast = FALSE, ECM = FALSE,
            arbmean = TRUE, arbvar = TRUE)

Arguments

lambda: Initial value of mixing proportions. Automatically repeated as necessary to produce a vector of length k, then normalized to sum to 1. If NULL, then lambda is random from a uniform Dirichlet distribution (i.e., its entries are uniform random and then normalized to sum to 1).

mu: Starting value of vector of component means. If non-NULL and a scalar, arbmean is set to FALSE. If non-NULL and a vector, k is set to length(mu). If NULL, then the initial value is randomly generated from a normal distribution with center(s) determined by binning the data.

sigma: Starting value of vector of component standard deviations. If non-NULL and a scalar, arbvar is set to FALSE. If non-NULL and a vector, arbvar is set to TRUE and k is set to length(sigma). If NULL, then the initial value is generated randomly, based on a binning of the data.

k: Number of components. Initial value ignored unless mu and sigma are both NULL.

mean.constr: Equality constraints on the mean parameters, given as a vector of length k. Each vector entry helps specify the constraints, if any, on the corresponding mean parameter: if NA, the corresponding parameter is unconstrained; if numeric, the corresponding parameter is fixed at that value (see the sketch following this argument list).

sd.constr: Equality constraints on the standard deviation parameters. See mean.constr.

fast: If TRUE and k == 2 and arbmean == TRUE, then use normalmixEM2comp, which is a much faster version of the EM algorithm for this case. This version is less protected against certain kinds of underflow that can cause numerical problems, and it does not permit any restarts.

arbmean: If TRUE (the default), then the component densities are allowed to have different mus. If FALSE, then a scale mixture will be fit. Initial value ignored unless mu is NULL.

arbvar: If TRUE (the default), then the component densities are allowed to have different sigmas. If FALSE, then a location mixture will be fit. Initial value ignored unless sigma is NULL.
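A minimal sketch (not part of the original documentation) of the mean.constr syntax described above, assuming the mixtools package is installed; the simulated data and object names (y, fit3) are illustrative.

library(mixtools)

## Hedged sketch: 3-component fit with the second mean fixed at 0 via
## mean.constr; the NA entries leave the other means unconstrained.
set.seed(42)
y <- c(rnorm(100, mean = -3), rnorm(100, mean = 0), rnorm(100, mean = 3))
fit3 <- normalmixEM(y, k = 3, mean.constr = c(NA, 0, NA))
fit3$mu  # the second estimated mean is held at exactly 0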
Value

normalmixEM returns a list of class mixEM with items:

sigma: The final standard deviations. If arbmean = FALSE, then only the smallest standard deviation is returned. See scale below.

scale: If arbmean = FALSE, then the scale factor for the component standard deviations is returned. Otherwise, this is omitted from the output.

Details

This is the standard EM algorithm for normal mixtures that maximizes the conditional expected complete-data log-likelihood at each M-step of the algorithm. If desired, the EM algorithm may be replaced by an ECM algorithm (see the ECM argument)
that alternates between maximizing with respect to the mu
and lambda while holding sigma fixed, and maximizing with
respect to sigma and lambda while holding mu
fixed. In the case where arbmean is FALSE
and arbvar is TRUE, there is no closed-form EM algorithm,
so the ECM option is forced in this case; a minimal sketch of this case follows.
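A minimal sketch (not part of the original documentation) of the arbmean = FALSE, arbvar = TRUE case in which the ECM steps are forced, also showing how the returned components might be inspected; the simulated data and object names (z, fit) are illustrative.

library(mixtools)

## Hedged sketch: scale mixture (common mean, unequal variances), the
## case in which the ECM option is forced.
set.seed(123)
z <- c(rnorm(150, mean = 5, sd = 1), rnorm(150, mean = 5, sd = 3))
fit <- normalmixEM(z, k = 2, arbmean = FALSE, arbvar = TRUE)
fit$lambda          # estimated mixing proportions
fit$mu              # the common mean (arbmean = FALSE)
fit$sigma           # smallest standard deviation (see Value above)
fit$scale           # scale factors for the component standard deviations
head(fit$posterior) # posterior membership probabilities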
See Also

mvnormalmixEM, normalmixEM2comp, normalmixMMlc, spEMsymloc

Examples

## Analyzing the Old Faithful geyser data with a 2-component mixture of normals.
data(faithful)
attach(faithful)
set.seed(100)
system.time(out <- normalmixEM(waiting, arbvar = FALSE, epsilon = 1e-03))
out
system.time(out2 <- normalmixEM(waiting, arbvar = FALSE, epsilon = 1e-03, fast = TRUE))
out2 # same thing but much faster
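As a possible follow-up (not part of the original example), the fitted object can be inspected with the summary and plot methods that mixtools provides for class mixEM:

summary(out)              # parameter estimates and final log-likelihood
plot(out, density = TRUE) # log-likelihood trace and fitted density overlay
detach(faithful)          # undo the attach() used above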