The Jensen-Shannon Similarity for two topics \(\bm z_{i}\) and
\(\bm z_{j}\) is calculated by
$$\begin{aligned}
JS(\bm z_{i}, \bm z_{j}) &= 1 - \left( KLD\left(\bm p_i, \frac{\bm p_i + \bm p_j}{2}\right) + KLD\left(\bm p_j, \frac{\bm p_i + \bm p_j}{2}\right) \right)/2 \\
&= 1 - KLD(\bm p_i, \bm p_i + \bm p_j)/2 - KLD(\bm p_j, \bm p_i + \bm p_j)/2 - \log(2)
\end{aligned}$$
where \(V\) is the vocabulary size, \(\bm p_k = \left(p_k^{(1)}, \ldots, p_k^{(V)}\right)\),
and \(p_k^{(v)}\) is the proportion of assignments of the
\(v\)-th word to the \(k\)-th topic. The second line follows from the identity
\(KLD\left(\bm p_k, \frac{\bm p_i + \bm p_j}{2}\right) = KLD(\bm p_k, \bm p_i + \bm p_j) + \log(2)\),
which holds because \(\bm p_k\) sums to one. KLD denotes the Kullback-Leibler
divergence, calculated by
$$KLD(\bm p_{k}, \bm p_{\Sigma}) = \sum_{v=1}^{V} p_k^{(v)} \log{\frac{p_k^{(v)}}{p_{\Sigma}^{(v)}}}.$$
To keep the logarithms well defined despite zero entries, a small epsilon is added to every
\(n_k^{(v)}\), the count (not the proportion) of assignments of the \(v\)-th word to the
\(k\)-th topic, before the proportions \(p_k^{(v)}\) are computed.
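As a concrete illustration, the following Python/NumPy sketch carries out this computation; the function names `kld` and `js_similarity` as well as the default `eps = 1e-6` are illustrative assumptions rather than part of the description above. It adds the epsilon to the raw counts, normalizes them to proportions, and evaluates both forms of the formula to confirm they agree.

```python
import numpy as np

def kld(p, q):
    # Kullback-Leibler divergence: sum_v p^(v) * log(p^(v) / q^(v)).
    # Assumes all entries of p and q are strictly positive.
    return float(np.sum(p * np.log(p / q)))

def js_similarity(n_i, n_j, eps=1e-6):
    # n_i, n_j: vectors of word-assignment counts for two topics (length V).
    # Add eps to every count so zero counts cannot produce log(0),
    # then normalize the smoothed counts to proportions.
    p_i = (n_i + eps) / np.sum(n_i + eps)
    p_j = (n_j + eps) / np.sum(n_j + eps)
    m = (p_i + p_j) / 2
    # Definition via the mixture distribution m = (p_i + p_j) / 2.
    js = 1 - (kld(p_i, m) + kld(p_j, m)) / 2
    # Rearranged form from the text; agrees with js up to rounding.
    js_alt = 1 - kld(p_i, p_i + p_j) / 2 - kld(p_j, p_i + p_j) / 2 - np.log(2)
    assert np.isclose(js, js_alt)
    return js

# Toy example: two topics over a vocabulary of five words.
n_i = np.array([10.0, 0.0, 5.0, 3.0, 2.0])
n_j = np.array([8.0, 4.0, 0.0, 6.0, 2.0])
print(js_similarity(n_i, n_j))
```

Note that the epsilon is applied to the counts before normalization, matching the description above, rather than to the proportions themselves.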