tileHMM (version 1.0-7)

viterbiEM: Efficient Estimation of HMM Parameters

Description

Uses a combination of Viterbi training and the Baum-Welch algorithm to estimate the parameters of a hidden Markov model.

Usage

viterbiEM(hmm, data, max.iter = c(5, 15), eps = 0.01, verbose = 0, ...)

Arguments

hmm
Object of class hmm. This is used as the starting point for the optimisation procedure.
data
A list of observation sequences.
max.iter
Maximum number of iterations (see Details).
eps
Minimum change in log-likelihood between iterations (see Details).
verbose
Level of verbosity. Higher numbers produce more status messages.
...
Additional arguments to be passed to viterbiTraining and baumWelch (see Details).

Value

An object of class hmm with optimised parameter estimates.

Details

The arguments max.iter and eps accept either one or two elements. With two elements, the first applies to viterbiTraining and the second to baumWelch; with a single element, the same value is used for both. Additional arguments can be passed to viterbiTraining and baumWelch individually by using arguments of the form viterbi = list(a = a.value) and baumWelch = list(b = b.value), respectively. All other arguments are passed on to both functions.
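As an illustration of the calling convention described above, the following sketch reuses the hmm2 and obs.lst objects from the Examples section. The iteration counts and tolerances are illustrative values, not recommendations, and a.value / b.value are placeholders (taken verbatim from the text above) for whatever arguments viterbiTraining and baumWelch actually accept:

```r
## Two-element max.iter and eps: the first value applies to
## viterbiTraining, the second to baumWelch.
fit <- viterbiEM(hmm2, obs.lst, max.iter = c(3, 10), eps = c(0.1, 0.01))

## Function-specific arguments go in named lists; anything else
## passed via '...' is forwarded to both functions.
fit <- viterbiEM(hmm2, obs.lst,
                 viterbi   = list(a = a.value),   # viterbiTraining only
                 baumWelch = list(b = b.value))   # baumWelch only
```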

References

Humburg, P., Bulger, D. and Stone, G. Parameter estimation for robust HMM analysis of ChIP-chip data. BMC Bioinformatics 2008, 9:343.

See Also

baumWelch, viterbiTraining, hmm.setup

Examples

## create two state HMM with t distributions
state.names <- c("one","two")
transition <- c(0.035, 0.01)
location <- c(1, 2)
scale <- c(1, 1)
df <- c(4, 6)
hmm1 <- getHMM(list(a=transition, mu=location, sigma=scale, nu=df),
    state.names)

## generate observation sequences from model
obs.lst <- list()
for(i in 1:50) obs.lst[[i]] <- sampleSeq(hmm1, 100)

## fit an HMM to the data (with fixed degrees of freedom)
hmm2 <- hmm.setup(obs.lst, state=c("one","two"), df=5)
hmm2.fit <- viterbiEM(hmm2, obs.lst, max.iter=c(5,15), verbose=2, df=5)
