So in reading group today Chernoff bounds came up.  The bound basically says that if we sample independent bounded random variables and sum them, the sum is very likely to be close to its expectation.  Estimating partition functions is a central challenge in many machine learning applications, so couldn’t we just sample, estimate the sum, and have the bound guarantee that we’re not far off?

To wit, let Y_1, Y_2, \ldots, Y_n be independent random variables bounded between 0 and B, and let S = \sum_i Y_i.  Then Hoeffding’s inequality gives P(S - \mathbb{E}[S] \ge n\epsilon) \le \exp(-\frac{2n\epsilon^2}{B^2}).
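As a sanity check, here is a small simulation in Python/NumPy comparing the empirical tail probability against the bound; the Uniform(0, B) distribution is just an illustrative choice, not anything from the post:

```python
import numpy as np

# Sketch: empirically check Hoeffding's bound
#   P(S - E[S] >= n*eps) <= exp(-2*n*eps^2 / B^2)
# for i.i.d. draws bounded in [0, B].  Uniform(0, B) is an
# illustrative assumption.
rng = np.random.default_rng(0)
n, B, eps, trials = 100, 1.0, 0.1, 20000

S = rng.uniform(0, B, size=(trials, n)).sum(axis=1)  # E[S] = n*B/2
empirical = np.mean(S - n * B / 2 >= n * eps)
bound = np.exp(-2 * n * eps**2 / B**2)

print(empirical, bound)  # empirical tail sits well below exp(-2)
```

Note that the bound here, exp(-2) ≈ 0.135, is quite loose for this particular distribution; Hoeffding only uses the range [0, B], not the variance.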

Now let us adapt this to estimating the partition function, Z.  Draw X_1, \ldots, X_n uniformly at random from the state space \mathscr{X}, and let Y_i \equiv \mu(X_i), where \mu is some measure bounded above by B. Note that Y_i is a random variable that satisfies the aforementioned conditions.  Also note that Z = \sum_{x \in \mathscr{X}} \mu(x), so \mathbb{E}[\mu(X_i)] = \frac{Z}{|\mathscr{X}|}.  Taking \hat{Z} = \frac{|\mathscr{X}|}{n} \sum_i Y_i as the empirical estimate of Z, we have S - \mathbb{E}[S] = \frac{n(\hat{Z} - Z)}{|\mathscr{X}|}, so the event in the inequality is exactly \frac{\hat{Z} - Z}{|\mathscr{X}|} \ge \epsilon.  This seems pretty good, doesn’t it?
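To make the estimator concrete, here is a hedged sketch in Python/NumPy; the toy measure \mu(x) = \exp(\theta \cdot x) on binary strings is my own assumption, chosen small enough that the exact Z can be brute-forced for comparison:

```python
import numpy as np

# Illustrative sketch of the naive estimator: draw X_i uniformly
# from the state space, average mu(X_i), and scale by |X| to
# estimate Z.  mu is a toy unnormalized measure on {0,1}^d, an
# assumption rather than the post's Ising model.
rng = np.random.default_rng(1)
d = 10                       # 2^d states, small enough to enumerate
theta = rng.normal(size=d)

def mu(x):
    # unnormalized measure exp(theta . x) for rows of x in {0,1}^d
    return np.exp(x @ theta)

# Exact partition function by brute force (feasible for d = 10).
all_states = np.array([[(k >> i) & 1 for i in range(d)]
                       for k in range(2**d)])
Z_exact = mu(all_states).sum()

# Monte Carlo estimate: Z_hat = |X| * mean(mu(X_i)).
n = 5000
X = rng.integers(0, 2, size=(n, d))
Z_hat = 2**d * mu(X).mean()

print(Z_exact, Z_hat)
```

For this weakly coupled toy measure the estimate lands close to the truth, which is exactly what the plots below show failing once the coupling gets strong.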

So let’s see what happens when we do things empirically.  I randomly generated Ising models by sampling canonical parameters (using the standard overspecified representation) from a N(0, 1) distribution for both individual and pairwise parameters.  In total, there were 40 nodes.  I estimated the partition function (or rather, the partition function divided by 2^{40}) using the above technique and plotted the estimate as a function of iteration over 300 iterations.  I restarted the estimation 5 times to see whether the different runs would converge to similar values.   Attached are said plots for 2 different models (10 runs of 300 iterations in total).

Monte Carlo estimates of an Ising model partition function (1/2)

Monte Carlo estimates of an Ising model partition function (2/2)
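For readers who want to poke at this themselves, here is a rough re-creation of the experiment in Python/NumPy on a smaller model (15 nodes instead of 40, a fully connected graph, and {0,1}-valued spins; these details are assumptions where the post doesn’t specify them):

```python
import numpy as np

# Hedged re-creation of the experiment: a random Ising model with
# node and pair parameters drawn N(0,1), and a running Monte Carlo
# estimate of Z / 2^d from uniform samples.
rng = np.random.default_rng(2)
d = 15
h = rng.normal(size=d)          # node parameters
J = np.triu(rng.normal(size=(d, d)), 1)   # pair parameters

def mu(X):
    # unnormalized measure exp(h.x + x'Jx) for rows of X in {0,1}^d
    return np.exp(X @ h + np.einsum('ni,ij,nj->n', X, J, X))

n_iters = 300
X = rng.integers(0, 2, size=(n_iters, d))
running = np.cumsum(mu(X)) / np.arange(1, n_iters + 1)  # Z/2^d estimates

# The sudden jumps come from single huge terms dominating the mean.
print(running[-1], mu(X).max())
```

Plotting `running` reproduces the qualitative behavior in the figures: long flat stretches punctuated by jumps whenever a sample happens to land near a peaky mode.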

It seems like this estimate does horrendously.  You might as well use a random number generator =).  The key problem is that there are sudden jumps in our estimates: when there’s enough coupling in the model, the modes become very peaky, and a single lucky sample can dwarf everything seen so far.  Another way of looking at it is by examining the bound on the right-hand side.  Recall that this term depends on B, the bound on the measure.  Unfortunately, B can be quite large; for the parameters I’ve chosen, it is something in the neighborhood of \exp(\frac{40 \cdot 41}{2 \cdot 2} | N(0, 1) | ) and grows exponentially in the number of nodes in the graph.
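A quick back-of-the-envelope in Python, taking the post’s estimate of \log B at face value, shows just how vacuous the guarantee becomes:

```python
import math

# Taking the post's estimate log B ~ (40*41)/(2*2) * E|N(0,1)|
# at face value.  E|N(0,1)| = sqrt(2/pi) ~ 0.80 (folded normal mean).
log_B = (40 * 41) / (2 * 2) * math.sqrt(2 / math.pi)

# Hoeffding's exponent is -2*n*eps^2 / B^2, so a nontrivial
# guarantee needs n on the order of B^2, i.e. log n ~ 2*log B.
print(log_B)        # roughly 327
print(2 * log_B)    # log of the sample size needed: roughly 654
```

In other words, the number of samples required for the bound to say anything at all is around e^{650}, which is hopeless for any sampler.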

So there we have it: Hoeffding’s inequality can only be applied when there isn’t much coupling and the bound on the measure is small.  As the bound increases, Hoeffding’s inequality gives us exponentially worse guarantees; unfortunately, the strongly coupled case is precisely the interesting case!


1 Comment

Filed under Uncategorized

One response to “{Hoeffd|Is}ing”

  1. Indraneel

    Testing latex comments: e^{i\pi} + 1 = 0
