nbm_em(count, alpha, beta, wght, NBM_NIT_MAX = 250, NBM_TOL = 0.01)
In the E-step, the posterior probabilities are evaluated using NB density functions with the current estimates of $\alpha_k$, $\beta_k$, and $\pi_k$. In the CM-step, $\pi_k$ is updated first, followed by Newton updates of $\alpha_k$ and $\beta_k$. The EM iteration terminates when the percentage increase of the log likelihood drops below NBM_TOL, a well-defined criterion since EM is guaranteed to increase the likelihood at each iteration. For more details, please see the manuscript of RIPSeeker.
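The E-step/CM-step loop described above can be sketched in R. This is a minimal illustration, not RIPSeeker's implementation: `nbm_em_sketch` is a hypothetical name, the NB density uses the Gamma-Poisson parameterization (shape `alpha`, rate `beta`, so `prob = beta/(beta + 1)` in `dnbinom`), and the component parameters are updated by weighted moment matching as a stand-in for the Newton updates that `nbm_em` performs.

```r
# Sketch of EM for a K-component negative binomial mixture (assumed
# parameterization: alpha = Gamma shape, beta = Gamma rate, so the NB
# density is dnbinom(x, size = alpha, prob = beta/(beta + 1))).
nbm_em_sketch <- function(count, alpha, beta, wght,
                          nit_max = 250, tol = 0.01) {
  K <- length(wght)
  loglik_prev <- -Inf
  for (it in seq_len(nit_max)) {
    # E-step: posterior probability of each component given the counts
    dens <- sapply(seq_len(K), function(k)
      wght[k] * dnbinom(count, size = alpha[k],
                        prob = beta[k] / (beta[k] + 1)))
    postprob <- dens / rowSums(dens)

    # CM-step: mixing weights first ...
    wght <- colMeans(postprob)

    # ... then alpha_k, beta_k (moment matching here, Newton in nbm_em)
    for (k in seq_len(K)) {
      m <- sum(postprob[, k] * count) / sum(postprob[, k])
      v <- sum(postprob[, k] * (count - m)^2) / sum(postprob[, k])
      if (v > m) {               # NB requires overdispersion (var > mean)
        alpha[k] <- m^2 / (v - m)
        beta[k]  <- m / (v - m)
      }
    }

    # stop when the percentage increase in log likelihood drops below tol
    loglik <- sum(log(rowSums(dens)))
    if (is.finite(loglik_prev) &&
        100 * (loglik - loglik_prev) / abs(loglik_prev) < tol) break
    loglik_prev <- loglik
  }
  list(alpha = alpha, beta = beta, wght = wght,
       postprob = postprob, loglik = loglik)
}
```

Updating $\pi_k$ before $\alpha_k$ and $\beta_k$ is what makes this a *conditional* maximization (ECM) step: each parameter block is maximized holding the others fixed, which preserves EM's monotone likelihood increase.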
X. L. Meng and D. B. Rubin, Maximum likelihood estimation via the ECM algorithm: a general framework, Biometrika, 80(2):267-278 (1993).
J. A. Fessler and A. O. Hero, Space-alternating generalized expectation-maximization algorithm, IEEE Transactions on Signal Processing, 42(10):2664-2677 (1994).
O. Cappé, H2M: A set of MATLAB/OCTAVE functions for the EM estimation of mixtures and hidden Markov models (2001).
nbh_init, nbh, nbh.GRanges, nbh_em
# Simulate count data from a two-state NB hidden Markov model
TRANS_s <- matrix(c(0.9, 0.1, 0.3, 0.7), nrow = 2, byrow = TRUE)
alpha_s <- c(2, 4)
beta_s <- c(1, 0.25)
Total <- 1000
x <- nbh_gen(TRANS_s, alpha_s, beta_s, Total)

N <- 2  # number of mixture components

cnt <- x$count
label <- x$label
Total <- length(cnt)

# dummy initialization
wght0 <- c(0.5, 0.5)
alpha0 <- c(1, 20)
beta0 <- c(1, 1)
NIT_MAX <- 50
TOL <- 1e-100

# initialize param with nbm
nbm <- nbm_em(cnt, alpha0, beta0, wght0, NIT_MAX, TOL)

# MAP accuracy: fraction of positions whose most probable component
# matches the true simulated state
map.accuracy <- length(which(max.col(nbm$postprob) == label)) / Total
print(map.accuracy)