hmm.discnp (version 0.1-7)

mps: Most probable states.

Description

Calculates the most probable hidden state underlying each observation.

Usage

mps(y, object = NULL, tpm, Rho, ispd = NULL)

Arguments

y
The observations for which the underlying most probable hidden states are required. May be a single sequence of observations, or a list, each component of which constitutes a (replicate) sequence of observations. If y is missing, it is set equal to the y component of object, if that component exists.
object
An object describing a fitted hidden Markov model, as returned by hmm(). In order to make any kind of sense, object should bear some reasonable relationship to y.
tpm
The transition probability matrix for a hidden Markov model; ignored if object is non-null. Should bear some reasonable relationship to y.
Rho
A matrix specifying the probability distributions of the observations for a hidden Markov model; ignored if object is non-null. Should bear some reasonable relationship to y.
ispd
A vector specifying the initial state probability distribution for a hidden Markov model; ignored if object is non-null. Should bear some reasonable relationship to y. If both ispd and object are NULL, then ispd is taken to be the stationary distribution of the chain, as determined by tpm.

Value

  • If y is a single observation sequence, then the value is a vector of the corresponding most probable states.

  • If y is a list of replicate sequences, then the value is a list, the $j$-th entry of which constitutes the vector of most probable states underlying the $j$-th replicate sequence.
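
For instance (an illustrative sketch only; y.list and fit are hypothetical stand-ins for a list of replicate observation sequences and a model fitted to them by hmm()):

s <- mps(y.list, fit)   # y.list is a list of replicate sequences
length(s)               # one component per replicate sequence
s[[2]]                  # most probable states underlying the 2nd sequence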

Warning

The sequence of most probable states as calculated by this function will not in general be the most probable sequence of states. It may not even be a possible sequence of states. This function looks at the state probabilities separately for each time $t$, and not at the states in their sequential context.

To obtain the most probable sequence of states use viterbi().
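
As a hedged illustration (using y.num and fit.num as constructed in the Examples below, and assuming viterbi() accepts the same (y, object) calling pattern as mps()), the two notions can be compared directly:

s.mps <- mps(y.num, fit.num)      # pointwise most probable states
s.vit <- viterbi(y.num, fit.num)  # most probable sequence of states
# The two need not agree at every time point, and s.mps need not even
# be a possible realisation of the underlying chain.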

Details

For each $t$ the maximum value of $\gamma_t(i)$, i.e. of the (estimated) probability that the state at time $t$ is equal to $i$, is calculated, and the corresponding index is returned. These indices are interpreted as the values of the (most probable) states. That is, the states are assumed to be 1, 2, ..., $K$, for some $K$.
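
The following toy sketch (not part of the package; gamma here is a hypothetical $K \times n$ matrix whose $(i, t)$ entry plays the role of $\gamma_t(i)$) shows the nature of the calculation:

gamma <- matrix(c(0.7, 0.3,
                  0.4, 0.6,
                  0.2, 0.8), nrow = 2)  # gamma[i, t] = P(state at time t is i)
apply(gamma, 2, which.max)              # most probable state at each t: 1 2 2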

References

Rabiner, L. R., "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, vol. 77, pp. 257--286, 1989.

See Also

hmm(), sim.hmm(), viterbi()

Examples

# See the help for sim.hmm() for how to generate y.num.
fit.num <- hmm(y.num, K=2, verb=TRUE)
s.1 <- mps(y.num, fit.num)
s.2 <- mps(y.num, tpm=P, ispd=c(0.25, 0.75), Rho=R)  # P and R as in the help
                                                     # for sim.hmm().
# The labelling of the states has been swapped between the two fits;
# 3-s.1[,1] is much more similar to s.2[,1] than s.1[,1] is.
