Monthly Archives: September 2009

When is first-order better than second-order?

Dave and I were recently talking about Asuncion et al.’s wonderful recent paper “On Smoothing and Inference for Topic Models.” One thing that caught our eye was the CVB0 inference method for topic models, which is described as a first-order approximation of the collapsed variational Bayes approach. The odd thing is that this first-order approximation performs better than other, more “principled” approaches. I want to try to understand why. Here’s my current less-than-satisfactory stab:

Let me just lay out the problem. Suppose I want to approximate the marginal posterior over topic assignments z_i in a topic model given the observed words w, p(z_i | w). We can expand this probability using an expectation,

p(z_i | w) = \mathbb{E}_{p(z_{-i} | w)}[p(z_i | z_{-i}, w)].

We can’t compute the expectation analytically, so we must turn to an approximate inference technique. One technique is to run a Gibbs sampler to draw S samples from the joint posterior over topic assignments, z^{(1)}, z^{(2)}, \ldots, z^{(S)}. Using these samples we approximate the expectation,

p(z_i | w) \approx \frac{1}{S}\sum_s p(z_i | z^{(s)}_{-i}, w).
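As a sketch, this Monte Carlo average is just a mean of conditional distributions over samples (the helper `conditional_probs` here is a hypothetical stand-in for the model's full conditional, not anything from the paper):

```python
import numpy as np

def rao_blackwell_estimate(samples, conditional_probs):
    """Approximate p(z_i | w) by averaging the full conditional
    p(z_i | z_{-i}^{(s)}, w) over Gibbs samples.

    samples: list of sampled assignment states z^{(s)};
    conditional_probs: function mapping a state to the length-K
    conditional distribution for token i (hypothetical helper).
    """
    return np.mean([conditional_probs(z) for z in samples], axis=0)
```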

In the case of LDA, this conditional probability is proportional to

p(z_i = k | z^{(s)}_{-i}, w) \propto \frac{N^{\lnot i}_{wk} + \eta}{N^{\lnot i}_k + V \eta} (N^{\lnot i}_{dk} + \alpha),

where
  • \alpha is the Dirichlet hyperparameter for topic proportion vectors;
  • \eta is the Dirichlet hyperparameter for the topic multinomials;
  • N^{\lnot i}_{wk} is the number of times topic k has been assigned to word w;
  • N^{\lnot i}_{k} is the number of times topic k has been assigned overall;
  • N^{\lnot i}_{dk} is the number of times topic k has been assigned in document d.

Note that the above counts do not include the current value of z_i (hence the superscript).
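As a minimal sketch (function name and the toy numbers are my own), the conditional above is just elementwise arithmetic on the count vectors:

```python
import numpy as np

def lda_conditional(n_wk, n_k, n_dk, alpha, eta, V):
    """Collapsed conditional p(z_i = k | z_{-i}, w) for LDA.

    n_wk: count of topic k assigned to the current word w (length K),
    n_k:  total count for each topic (length K),
    n_dk: count of topic k in the current document d (length K);
    all counts must already exclude the current assignment z_i.
    """
    p = (n_wk + eta) / (n_k + V * eta) * (n_dk + alpha)
    return p / p.sum()  # normalize over the K topics

# Toy example: K = 2 topics, V = 3 vocabulary words.
probs = lda_conditional(np.array([2., 0.]), np.array([5., 4.]),
                        np.array([1., 3.]), alpha=0.1, eta=0.1, V=3)
```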

Instead of the Gibbs sampling approach, we could also approximate the expectation with a first-order approximation (which we denote \gamma_i),

\mathbb{E}_{p(z_{-i} | w)}[p(z_i | z_{-i}, w)] \approx \gamma_i \propto \frac{\mathbb{E}[N^{\lnot i}_{wk}] + \eta}{\mathbb{E}[N^{\lnot i}_k] + V \eta} (\mathbb{E}[N^{\lnot i}_{dk}] + \alpha),

where the expectations are taken with respect to p(z_{-i} | w). Because the terms in the expectations are simple sums, they can be computed solely as functions of \gamma. For example,

\mathbb{E}[N^{\lnot i}_k] = \mathbb{E}\left[\sum_{j \ne i} z_{jk}\right] = \sum_{j \ne i} \mathbb{E}[z_{jk}] = \sum_{j \ne i} p(z_j = k | w) \approx \sum_{j \ne i} \gamma_{jk}.
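A minimal sketch of the resulting fixed-point sweep (my own illustrative code, not the paper's implementation): maintain the expected counts as sums of \gamma, subtract token i's contribution, apply the update, and add the new \gamma_i back.

```python
import numpy as np

def cvb0_sweep(gamma, words, docs, alpha, eta, V):
    """One full sweep of the first-order (CVB0-style) update.

    gamma: (N, K) array; gamma[i, k] approximates p(z_i = k | w).
    words, docs: length-N integer arrays giving each token's word
    type and document. All counts are expected counts (sums of gamma).
    """
    N, K = gamma.shape
    D = docs.max() + 1
    n_wk = np.zeros((V, K))
    np.add.at(n_wk, words, gamma)      # expected word-topic counts
    n_dk = np.zeros((D, K))
    np.add.at(n_dk, docs, gamma)       # expected document-topic counts
    n_k = gamma.sum(axis=0)            # expected total topic counts
    for i in range(N):
        # "not i" counts: remove token i's current contribution.
        nw = n_wk[words[i]] - gamma[i]
        nk = n_k - gamma[i]
        nd = n_dk[docs[i]] - gamma[i]
        g = (nw + eta) / (nk + V * eta) * (nd + alpha)
        g /= g.sum()
        # Add the updated contribution back into the expected counts.
        n_wk[words[i]] = nw + g
        n_k = nk + g
        n_dk[docs[i]] = nd + g
        gamma[i] = g
    return gamma
```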

Thus, the solution to this approximation is exactly the CVB0 technique described in the paper. Note that I never directly introduced the concept of a variational distribution! CVB0 is simply a first-order approximation to the true expectations; in contrast, the second-order CVB approximation is an approximation of the variational expectations. So maybe that’s the answer to the puzzle: sometimes a first-order approximation to the true value is better than a second-order approximation to a surrogate objective.

Does anyone have any other explanations?


Filed under Uncategorized

R LDA package minor update: 1.0.1

Dave gently reminded me that properly assessing convergence of our models is important and that just running a sampler for N iterations is unsatisfactory. I agree wholeheartedly. As a first step, the collapsed Gibbs sampler in the R LDA package can now optionally report the log likelihood (to within a constant). For example, we can rerun the model fit in demo(lda) but with an extra flag set:

result <- lda.collapsed.gibbs.sampler(cora.documents,
                                      K,  ## Num clusters
                                      cora.vocab,
                                      25,  ## Num iterations
                                      0.1,  ## alpha
                                      0.1,  ## eta
                                      compute.log.likelihood = TRUE)

Using the now-available variable result$log.likelihoods, we can plot the progress of the sampler versus iteration:

[Figure: log likelihood as a function of iteration]

Grab it while it’s hot:

