hmm.discnp (version 0.2-4)

mps: Most probable states.

Description

Calculates the most probable hidden state underlying each observation.

Usage

mps(y, object = NULL, tpm, Rho, ispd=NULL)

Arguments

y

The observations for which the underlying most probable hidden states are required. May be a sequence of observations, or a list each component of which constitutes a (replicate) sequence of observations. If y is missing, it is set equal to the y component of object, provided that object and that component exist; otherwise an error is thrown.

object

An object describing a fitted hidden Markov model, as returned by hmm(). In order to make any kind of sense, object should bear some reasonable relationship to y.

tpm

The transition probability matrix for a hidden Markov model; ignored if object is non-null. Should bear some reasonable relationship to y.

Rho

A matrix specifying the probability distributions of the observations for a hidden Markov model; ignored if object is non-null. Should bear some reasonable relationship to y.

ispd

A vector specifying the initial state probability distribution for a hidden Markov model, or a matrix each of whose columns is a trivial (“delta function”) vector specifying the “most probable” initial state for each observation sequence.

This argument is ignored if object is non-null. It should bear some reasonable relationship to y. If both ispd and object are NULL then ispd is taken to be the stationary distribution of the chain, calculated from tpm.
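The stationary distribution referred to above is the left eigenvector of tpm associated with eigenvalue 1, rescaled to sum to 1. A minimal base-R sketch of that calculation (the function name stationary is illustrative, not part of hmm.discnp):

```r
# Illustrative sketch: the stationary distribution of a transition
# probability matrix "tpm" is the left eigenvector of tpm associated
# with eigenvalue 1, rescaled so that its entries sum to 1.
stationary <- function(tpm) {
    e <- eigen(t(tpm))  # left eigenvectors of tpm = eigenvectors of t(tpm)
    v <- Re(e$vectors[, which.min(abs(e$values - 1))])
    v / sum(v)
}

P <- matrix(c(0.9, 0.1,
              0.2, 0.8), nrow = 2, byrow = TRUE)
stationary(P)  # for this P, (2/3, 1/3)
```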

Value

If y is a single observation sequence, then the value is a vector of corresponding most probable states.

If y is a list of replicate sequences, then the value is a list, the \(j\)-th entry of which constitutes the vector of most probable states underlying the \(j\)-th replicate sequence.

Warning

The sequence of most probable states as calculated by this function will not in general be the most probable sequence of states. It may not even be a possible sequence of states. This function looks at the state probabilities separately for each time \(t\), and not at the states in their sequential context.

To obtain the most probable sequence of states use viterbi().

Details

For each \(t\) the maximum value of \(\gamma_t(i)\), i.e. of the (estimated) probability that the state at time \(t\) is equal to \(i\), is calculated, and the corresponding index returned. These indices are interpreted as the values of the (most probable) states. I.e. the states are assumed to be 1, 2, …, \(K\), for some \(K\).
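The computation described above can be sketched in base R, independently of hmm.discnp. The function below (mps_sketch is an illustrative name, not part of the package) computes the smoothing probabilities \(\gamma_t(i)\) by the scaled forward-backward recursions and then takes the argmax over states at each time. It assumes, as in the package, that observations y are coded as integers indexing the rows of Rho, with Rho[k, i] the probability of observing value k in state i. As the Warning notes, the resulting sequence of per-time argmaxes need not be a feasible state path.

```r
# Sketch of the per-time-step most-probable-state calculation:
# gamma_t(i) = P(state_t = i | y), then which.max over i for each t.
mps_sketch <- function(y, tpm, Rho, ispd) {
    n <- length(y); K <- nrow(tpm)
    alpha <- matrix(0, n, K); beta <- matrix(0, n, K)
    a <- ispd * Rho[y[1], ]
    alpha[1, ] <- a / sum(a)                    # scale to avoid underflow
    for (t in 2:n) {
        a <- (alpha[t - 1, ] %*% tpm) * Rho[y[t], ]
        alpha[t, ] <- a / sum(a)
    }
    beta[n, ] <- 1
    for (t in (n - 1):1) {
        b <- tpm %*% (Rho[y[t + 1], ] * beta[t + 1, ])
        beta[t, ] <- b / sum(b)
    }
    gamma <- alpha * beta                       # per-row scalings cancel
    gamma <- gamma / rowSums(gamma)
    apply(gamma, 1, which.max)                  # most probable state at each t
}

tpm  <- matrix(c(0.7, 0.3, 0.4, 0.6), 2, 2, byrow = TRUE)
Rho  <- matrix(c(0.8, 0.1, 0.2, 0.9), 2, 2, byrow = TRUE)
ispd <- c(0.5, 0.5)
mps_sketch(c(1, 2, 1), tpm, Rho, ispd)
```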

References

Rabiner, L. R., "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, vol. 77, pp. 257--286, 1989.

See Also

hmm(), sim.hmm(), viterbi()

Examples

## Not run:
# See the help for sim.hmm() for how to generate y.num.
fit.num <- hmm(y.num, K = 2, verb = TRUE)
s.1 <- mps(y.num, fit.num)
s.2 <- mps(y.num, tpm = P, ispd = c(0.25, 0.75), Rho = R) # P and R as in the
                                                          # help for sim.hmm().
# The order of the states has gotten swapped; 3 - s.1[,1] is much
# more similar to s.2[,1] than is s.1[,1].
## End(Not run)