emMixHMM implements maximum-likelihood parameter estimation of a mixture of HMMs (MixHMM) by the Expectation-Maximization (EM) algorithm, known as the Baum-Welch algorithm in the context of HMMs.
emMixHMM(Y, K, R, variance_type = c("heteroskedastic", "homoskedastic"),
order_constraint = TRUE, init_kmeans = TRUE, n_tries = 1,
max_iter = 1000, threshold = 1e-06, verbose = FALSE)
Matrix of size \((n, m)\) representing the observed responses/outputs. Y consists of n functions of X observed at points \(1,\dots,m\).
The number of clusters (i.e., the number of HMM models).
The number of regimes (HMM components) for each cluster.
Optional character indicating whether the model is "homoskedastic" or "heteroskedastic". By default, the model is "heteroskedastic".
Optional. A logical indicating whether or not a mask of order one should be applied to the transition matrix of the Markov chain to provide ordered states. For the purpose of segmentation, it must be set to TRUE (the default).
Optional. A logical indicating whether or not the curve partition should be initialized by the K-means algorithm; otherwise, the curve partition is initialized randomly.
Optional. Number of runs of the EM algorithm. The solution providing the highest log-likelihood is returned. If n_tries > 1, then for the first run, parameters are initialized by uniformly segmenting the data into K segments, and for the subsequent runs, parameters are initialized by randomly segmenting the data into K contiguous segments.
Optional. The maximum number of iterations for the EM algorithm.
Optional. A numeric value specifying the threshold for the relative difference of log-likelihood between two steps of the EM as the stopping criterion.
Optional. A logical value indicating whether or not values of the log-likelihood should be printed during EM iterations.
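For illustration, the defaults above can be overridden as follows. This is a sketch only: it assumes the toydataset used in the example below is available, and all argument names are those documented above.

```r
# Sketch: fit a homoskedastic MixHMM with several EM restarts.
# Assumes toydataset and emMixHMM are available from the package.
data(toydataset)
Y <- t(toydataset[, 2:ncol(toydataset)])

mixhmm <- emMixHMM(Y = Y, K = 3, R = 3,
                   variance_type = "homoskedastic",
                   n_tries = 3,      # keep the best of 3 EM runs
                   max_iter = 500,
                   threshold = 1e-5,
                   verbose = FALSE)
```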
emMixHMM returns an object of class ModelMixHMM.
The emMixHMM function implements the EM algorithm. It starts with an initialization of the parameters, performed by the method initParam of the class ParamMixHMM, and then alternates between the E-step (a method of the class StatMixHMM) and the M-step (a method of the class ParamMixHMM) until convergence, i.e., until the relative variation of the log-likelihood between two EM iterations falls below the threshold parameter.
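Schematically, this alternation can be sketched as follows. The functions init_param, e_step, and m_step are hypothetical placeholders standing in for the initParam, E-step, and M-step methods of the ParamMixHMM and StatMixHMM classes; they are not part of the package API.

```r
# Hypothetical sketch of the EM loop behind emMixHMM (placeholder functions).
param       <- init_param(Y, K, R)   # stands in for ParamMixHMM$initParam
prev_loglik <- -Inf
for (iter in seq_len(max_iter)) {
  stat  <- e_step(Y, param)   # E-step: posterior probabilities, log-likelihood
  param <- m_step(Y, stat)    # M-step: update mixture and HMM parameters
  # Stop when the relative log-likelihood variation falls below threshold.
  if (is.finite(prev_loglik) &&
      abs(stat$loglik - prev_loglik) / abs(prev_loglik) < threshold) break
  prev_loglik <- stat$loglik
}
```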
data(toydataset)
Y <- t(toydataset[,2:ncol(toydataset)])
mixhmm <- emMixHMM(Y = Y, K = 3, R = 3, verbose = TRUE)
mixhmm$summary()
mixhmm$plot()