HMM (version 1.0)

viterbiTraining: Inferring the parameters of a Hidden Markov Model via Viterbi-training

Description

For an initial Hidden Markov Model (HMM) and a given sequence of observations, the Viterbi-training algorithm infers optimal parameters for the HMM. Viterbi-training usually converges much faster than the Baum-Welch algorithm, but its theoretical justification is weaker. Be careful: the algorithm converges to a local solution, which might not be the global optimum.
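
Conceptually, each iteration decodes the most likely state path for the current parameters and then re-estimates the parameters by counting along that path. The following is a minimal sketch of a single such iteration, using viterbi from this package; the actual viterbiTraining additionally applies pseudo counts and the convergence test described below, and the helper name viterbiIteration is illustrative only.

library(HMM)

# Illustrative single estimation step (not the package's exact internals)
viterbiIteration = function(hmm, observation) {
	path = viterbi(hmm, observation)  # most likely state path
	# Count state-to-state transitions along the decoded path
	trans = table(factor(path[-length(path)], levels=hmm$States),
		factor(path[-1], levels=hmm$States))
	# Count which symbol was emitted in which decoded state
	emis = table(factor(path, levels=hmm$States),
		factor(observation, levels=hmm$Symbols))
	# Normalize counts row-wise to probabilities (zero rows are
	# handled by pseudo counts in the real implementation)
	hmm$transProbs = trans / rowSums(trans)
	hmm$emissionProbs = emis / rowSums(emis)
	hmm
}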

Usage

viterbiTraining(hmm, observation, maxIterations=100, delta=1E-9, pseudoCount=0)

Arguments

hmm

A Hidden Markov Model.

observation

A sequence of observations.

maxIterations

The maximum number of iterations in the Viterbi-training algorithm.

delta

Additional termination condition: the algorithm stops before reaching the maximum number of iterations (maxIterations) if the transition and emission matrices have converged, i.e. if the difference between the transition and emission parameters of consecutive iterations is smaller than delta.

pseudoCount

Amount of pseudo counts added to the transition and emission counts in the estimation step of the Viterbi-training algorithm.
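
For illustration, a hedged sketch of how such pseudo counts typically enter the estimation: they are added to the raw counts before normalization, so that no transition or emission probability collapses to exactly zero (the count values below are made up).

counts = matrix(c(90, 10, 5, 95), 2, byrow=TRUE)  # raw transition counts
pseudoCount = 1
# Add pseudo counts, then normalize each row to probabilities
probs = (counts + pseudoCount) / rowSums(counts + pseudoCount)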

Value

Return Values:

hmm

The inferred HMM, in the same representation as used by initHMM.

difference

Vector of differences, one per iteration of the Viterbi-training. Each difference is the sum of the L2-norm distances between the transition and emission matrices of consecutive iterations.
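
A hedged sketch of how such a difference can be computed for one iteration, with made-up consecutive transition matrices A_old/A_new and emission matrices E_old/E_new:

A_old = matrix(c(.9,.1,.1,.9), 2)
A_new = matrix(c(.88,.12,.12,.88), 2)
E_old = matrix(c(.5,.51,.5,.49), 2)
E_new = matrix(c(.52,.49,.48,.51), 2)
# Sum of the L2-norm (Frobenius) distances of both matrix pairs
difference = sqrt(sum((A_old - A_new)^2)) + sqrt(sum((E_old - E_new)^2))
# Training terminates early once this value falls below delta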

Format

Dimension and Format of the Arguments.

hmm

A valid Hidden Markov Model, for example instantiated by initHMM.

observation

A vector of observations.

References

For details see: Lawrence R. Rabiner: A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE 77(2), pp. 257-286, 1989.

See Also

See baumWelch.

Examples

library(HMM)
# Initial HMM
hmm = initHMM(c("A","B"), c("L","R"),
	transProbs=matrix(c(.9,.1,.1,.9), 2),
	emissionProbs=matrix(c(.5,.51,.5,.49), 2))
print(hmm)
# Sequence of observations
a = sample(c(rep("L",100),rep("R",300)))
b = sample(c(rep("L",300),rep("R",100)))
observation = c(a,b)
# Viterbi-training
vt = viterbiTraining(hmm, observation, maxIterations=10)
print(vt$hmm)
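# The vector of per-iteration differences shows how quickly
# the training converged
print(vt$difference)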