Dave and I were recently talking about Asuncion et al.’s wonderful recent paper “On Smoothing and Inference for Topic Models.” One thing that caught our eye was the CVB0 inference method for topic models, which is described as a first-order approximation of the collapsed variational Bayes approach. The odd thing is that this first-order approximation performs better than other, more “principled” approaches. I want to try to understand why. Here’s my current less-than-satisfactory stab:
Let me just lay out the problem. Suppose I want to approximate the marginal posterior over topic assignments in a topic model given the observed words $\mathbf{w}$,

$$\gamma_{d,n,k} \equiv p(z_{d,n} = k \mid \mathbf{w}).$$

We can expand this probability using an expectation,

$$p(z_{d,n} = k \mid \mathbf{w}) = \mathbb{E}_{p(\mathbf{z}_{-(d,n)} \mid \mathbf{w})}\left[ p(z_{d,n} = k \mid \mathbf{z}_{-(d,n)}, \mathbf{w}) \right].$$
We can’t compute the expectation analytically, so we must turn to an approximate inference technique. One technique is to run a Gibbs sampler, whence we get samples from the joint posterior over topic assignments,

$$\mathbf{z}^{(1)}, \ldots, \mathbf{z}^{(S)} \sim p(\mathbf{z} \mid \mathbf{w}).$$

Then using these samples we approximate the expectation,

$$p(z_{d,n} = k \mid \mathbf{w}) \approx \frac{1}{S} \sum_{s=1}^{S} p(z_{d,n} = k \mid \mathbf{z}^{(s)}_{-(d,n)}, \mathbf{w}).$$
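As a toy illustration of why averaging conditionals over samples works (the joint table and variable names below are made up, not part of the LDA model), here is a Rao-Blackwellized estimate of a marginal from exact samples of a tiny two-variable joint:

```python
import random

random.seed(0)

# Made-up joint distribution over two binary variables z1, z2.
p = {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.4}

def cond_z1(z2):
    """The conditional p(z1 = 1 | z2), computed from the joint table."""
    return p[(1, z2)] / (p[(0, z2)] + p[(1, z2)])

# Draw exact samples from the joint (standing in for a Gibbs sampler's output).
outcomes, weights = zip(*p.items())
samples = random.choices(outcomes, weights=weights, k=20000)

# Estimate the marginal by averaging the conditional over the samples.
est = sum(cond_z1(z2) for _, z2 in samples) / len(samples)
exact = p[(1, 0)] + p[(1, 1)]  # true marginal p(z1 = 1)
```

The estimate converges to the true marginal as the number of samples grows; the same logic justifies averaging the LDA conditional over Gibbs samples.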
In the case of LDA, this conditional probability is proportional to

$$p(z_{d,n} = k \mid \mathbf{z}_{-(d,n)}, \mathbf{w}) \propto \frac{N^{-(d,n)}_{k,w_{d,n}} + \beta}{N^{-(d,n)}_{k} + V\beta} \left( N^{-(d,n)}_{d,k} + \alpha \right),$$

where

- $\alpha$ is the Dirichlet hyperparameter for topic proportion vectors;
- $\beta$ is the Dirichlet hyperparameter for the topic multinomials;
- $V$ is the vocabulary size;
- $N^{-(d,n)}_{k,w}$ is the number of times topic $k$ has been assigned to word $w$;
- $N^{-(d,n)}_{k}$ is the number of times topic $k$ has been assigned overall;
- $N^{-(d,n)}_{d,k}$ is the number of times topic $k$ has been assigned in document $d$.

Note that the above counts do not include the current value of $z_{d,n}$ (hence the $-(d,n)$ superscript).
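To make the sampler concrete, here is a minimal collapsed Gibbs sampler for LDA in plain Python; the toy corpus, vocabulary size, number of topics, and hyperparameter values are all made up for illustration:

```python
import random

random.seed(0)

# Toy corpus: each document is a list of word ids in {0, ..., V - 1}.
docs = [[0, 1, 2, 1], [2, 3, 3, 0], [1, 1, 4, 2]]
V, K = 5, 2            # vocabulary size, number of topics (toy values)
alpha, beta = 0.1, 0.01  # Dirichlet hyperparameters (toy values)

# Random initial topic assignments z[d][n] and the three count tables.
z = [[random.randrange(K) for _ in doc] for doc in docs]
N_kw = [[0] * V for _ in range(K)]   # times topic k assigned to word w
N_k = [0] * K                        # times topic k assigned overall
N_dk = [[0] * K for _ in docs]       # times topic k assigned in document d
for d, doc in enumerate(docs):
    for n, w in enumerate(doc):
        k = z[d][n]
        N_kw[k][w] += 1; N_k[k] += 1; N_dk[d][k] += 1

for sweep in range(200):  # Gibbs sweeps over every token
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[d][n]
            # Remove the current assignment: the -(d,n) counts.
            N_kw[k][w] -= 1; N_k[k] -= 1; N_dk[d][k] -= 1
            # Conditional p(z_dn = k | z_-(d,n), w), up to normalization.
            weights = [(N_kw[j][w] + beta) / (N_k[j] + V * beta)
                       * (N_dk[d][j] + alpha) for j in range(K)]
            k = random.choices(range(K), weights=weights)[0]
            z[d][n] = k
            N_kw[k][w] += 1; N_k[k] += 1; N_dk[d][k] += 1
```

Each sweep removes a token's assignment from the counts, samples a new topic from the conditional above, and adds the assignment back, so the three count tables always stay consistent with `z`.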
Instead of the Gibbs sampling approach, we could also approximate the expectation by taking a first-order approximation (which we denote $\gamma_{d,n,k}$), i.e., moving the expectation inside the conditional so that $\mathbb{E}[f(\mathbf{z})]$ is approximated by $f(\mathbb{E}[\mathbf{z}])$:

$$\gamma_{d,n,k} \propto \frac{\mathbb{E}[N^{-(d,n)}_{k,w_{d,n}}] + \beta}{\mathbb{E}[N^{-(d,n)}_{k}] + V\beta} \left( \mathbb{E}[N^{-(d,n)}_{d,k}] + \alpha \right),$$

where the expectations are taken with respect to $p(\mathbf{z}_{-(d,n)} \mid \mathbf{w})$. Because the terms in the expectations are simple sums of indicator variables, they can be computed solely as functions of the marginals $\gamma$. For example,

$$\mathbb{E}[N^{-(d,n)}_{d,k}] = \sum_{m \neq n} \gamma_{d,m,k}.$$
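Under the same toy setup as before (corpus and hyperparameters invented for illustration), the CVB0 iteration can be sketched by keeping a per-token distribution over topics and replacing the sampled counts with expected counts, which are just sums of these distributions:

```python
import random

random.seed(0)

docs = [[0, 1, 2, 1], [2, 3, 3, 0], [1, 1, 4, 2]]
V, K = 5, 2
alpha, beta = 0.1, 0.01

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

# gamma[d][n] is a length-K distribution over topics for token n of doc d.
gamma = [[normalize([random.random() for _ in range(K)]) for _ in doc]
         for doc in docs]

# Expected counts E[N_kw], E[N_k], E[N_dk]: sums of gammas, since the
# counts themselves are sums of indicator variables.
E_kw = [[0.0] * V for _ in range(K)]
E_k = [0.0] * K
E_dk = [[0.0] * K for _ in docs]
for d, doc in enumerate(docs):
    for n, w in enumerate(doc):
        for k in range(K):
            E_kw[k][w] += gamma[d][n][k]
            E_k[k] += gamma[d][n][k]
            E_dk[d][k] += gamma[d][n][k]

for sweep in range(100):
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            g_old = gamma[d][n]
            # Remove this token's contribution: the -(d,n) expected counts.
            for k in range(K):
                E_kw[k][w] -= g_old[k]
                E_k[k] -= g_old[k]
                E_dk[d][k] -= g_old[k]
            # First-order (CVB0) update: plug expected counts into the
            # Gibbs conditional and renormalize.
            g = normalize([(E_kw[k][w] + beta) / (E_k[k] + V * beta)
                           * (E_dk[d][k] + alpha) for k in range(K)])
            gamma[d][n] = g
            for k in range(K):
                E_kw[k][w] += g[k]
                E_k[k] += g[k]
                E_dk[d][k] += g[k]
```

Note how closely the update mirrors the Gibbs sampler: the only change is that a soft distribution is added to and removed from the (now fractional) count tables instead of a hard sampled assignment.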
Thus, the solution to this approximation is exactly the CVB0 technique described in the paper. Note that I never directly introduced the concept of a variational distribution! CVB0 is simply a first-order approximation to the true expectations; in contrast, the second-order CVB approximation is an approximation of the variational expectations. So maybe that’s the answer to the puzzle: sometimes a first-order approximation to the true value is better than a second-order approximation to a surrogate objective.
Does anyone have any other explanations?
Hi Jonathan,
How are you?
This derivation is related to what Yee Whye, Dave and I called the “Callen equations” in our CVB paper.
One thing perhaps to mention is that this is only an approximation of the implicit equation that still needs to be iterated to convergence. Hence, errors can build up and the final answer can be very wrong. A variational approximation, in contrast, optimizes an approximation to the objective function, which may therefore have more value.
Another question: what is the second order approximation of what you call “the true expectations”? Isn’t that exactly equal to the CVB equations as well? I guess I am not quite clear what the difference is between true and variational expectations or how they are defined.
So the second-order approximation has two parts: one is expanding the conditional probabilities to second order, the other is evaluating their expectations. The first part is easy; it amounts to a covariance matrix and some weighting.
In the CVB paper, one computes the covariance matrix with respect to the variational distribution (because it’s not easy to do so with respect to the true distribution). Since there’s no need to bring in a variational distribution for the definition of CVB0, it’s unclear how you would extend it to second order tractably.