Machine Learning Trick of the Day (2): Gaussian Integral Trick

Today's trick, the Gaussian integral trick, is one that allows us to re-express a (potentially troublesome) function in an alternative form, in particular, as an integral of a Gaussian against another function — integrals against a Gaussian turn out not to be too troublesome and can provide many statistical and computational benefits. One popular setting where we can exploit such an alternative representation is for inference in discrete undirected graphical models (think Boltzmann machines or discrete Markov random fields). In such cases, this trick lets us transform our discrete problem into one that has an underlying continuous (Gaussian) representation, which we can then solve using our other machine learning tricks. But this is part of a more general strategy that is used throughout machine learning, whether in Bayesian posterior analysis, deep learning or kernel machines. This trick has many facets, and this post explores the Gaussian integral trick and its more general form, auxiliary variable augmentation.

[Figure: Gaussian integral trick state expansion.]

Gaussian Integral Trick

The Gaussian integral trick is one of the statistical flavour and allows us to turn a function that is an exponential in $x^2$ into an exponential that is linear in $x$. We do this by augmenting the linear function with auxiliary variables and then integrating over these auxiliary variables, hence a form of auxiliary variable augmentation. The simplest form of this trick is to apply the following identity, valid for $a > 0$:

$$\exp(ax^2) = \frac{1}{\sqrt{4\pi a}}\int_{-\infty}^{\infty} \exp\left(-\frac{y^2}{4a} + xy\right)dy.$$

We can prove this to ourselves by exploiting our knowledge of Gaussian distributions (which this looks strikingly similar to) and our ability to complete the square when we see such quadratic forms. Separating out the scaling factor, we get:

$$\frac{1}{\sqrt{4\pi a}}\int \exp\left(-\frac{1}{4a}\left(y^2 - 4axy\right)\right)dy,$$

which by completing the square becomes:

$$\exp(ax^2)\,\frac{1}{\sqrt{4\pi a}}\int \exp\left(-\frac{(y - 2ax)^2}{4a}\right)dy = \exp(ax^2),$$

where the last integral is solved by matching it to a Gaussian with mean $2ax$ and variance $2a$, which we know has a normalisation of $\sqrt{4\pi a}$. This last step shows how this trick got its name.
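As a quick sanity check, the identity can also be verified numerically. Here is a minimal sketch (assuming numpy and scipy; the values of $a$ and $x$ are arbitrary test points) that compares the two sides using quadrature:

```python
import numpy as np
from scipy.integrate import quad

a, x = 0.7, 1.3  # arbitrary test values, with a > 0

# Right-hand side: (4*pi*a)^(-1/2) * integral of exp(-y^2/(4a) + x*y)
integral, _ = quad(lambda y: np.exp(-y**2 / (4 * a) + x * y),
                   -np.inf, np.inf)
rhs = integral / np.sqrt(4 * np.pi * a)

lhs = np.exp(a * x**2)
print(lhs, rhs)  # the two sides agree to numerical precision
```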

The 'Gaussian integral trick' was coined and initially described by Hertz et al. [Ch10, pg 253] [cite key=hertz], and is closely related to the Hubbard-Stratonovich transform (which provides the corresponding augmentation for $\exp(-ax^2)$).

Transforming Binary MRFs

This trick is also valid in the multivariate case, which is what we will most often be interested in. One good place to see this trick in action is when applied to binary MRFs or Boltzmann machines. Binary MRFs have a joint probability, for binary random variables $x \in \{0, 1\}^D$:

$$p(x) = \frac{1}{Z}\exp\left(\theta^\top x + \tfrac{1}{2}x^\top W x\right),$$

where $Z$ is the normalising constant. The (multivariate) Gaussian integral trick can be applied to the quadratic term in this energy function, allowing for an insightful analysis and an interesting reparameterisation that lets alternative inference methods be used. For example, introducing an auxiliary Gaussian variable $y$ (and assuming $W$ is positive definite, which can always be arranged by shifting its diagonal, since $x_i^2 = x_i$ for binary variables) gives the augmented joint:

$$p(x, y) = \frac{1}{Z\sqrt{\det(2\pi W)}}\exp\left(-\tfrac{1}{2}y^\top W^{-1}y + (\theta + y)^\top x\right),$$

which recovers $p(x)$ when $y$ is integrated out. The payoff lies in the conditionals: $p(y \mid x) = \mathcal{N}(y; Wx, W)$ is Gaussian, and $p(x \mid y)$ factorises across dimensions, with $x_i \sim \text{Bernoulli}(\sigma(\theta_i + y_i))$, so the once-coupled binary variables can be sampled independently given $y$.
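To make this concrete, here is a minimal sketch of the two-block Gibbs sampler these conditionals suggest, run on a small randomly generated Boltzmann machine; the parameter values and the moment estimate at the end are illustrative assumptions of this sketch, not anything taken from a particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1 / (1 + np.exp(-u))

# A small Boltzmann machine p(x) ∝ exp(theta^T x + 0.5 x^T W x),
# built so that W is symmetric positive definite.
D = 5
A = 0.3 * rng.normal(size=(D, D))
W = A @ A.T + 0.5 * np.eye(D)
theta = rng.normal(size=D)
L = np.linalg.cholesky(W)  # for drawing from N(Wx, W)

x = rng.integers(0, 2, size=D).astype(float)
samples = []
for _ in range(10_000):
    # Block 1: y | x ~ N(Wx, W), the auxiliary Gaussian.
    y = W @ x + L @ rng.normal(size=D)
    # Block 2: x | y factorises, x_i ~ Bernoulli(sigmoid(theta_i + y_i)).
    x = (rng.random(D) < sigmoid(theta + y)).astype(float)
    samples.append(x.copy())

print(np.mean(samples[1000:], axis=0))  # estimate of E[x] after burn-in
```

Notice that neither update requires evaluating $Z$: both conditionals are standard distributions, which is exactly the computational benefit the augmentation buys us.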

Variable Augmentation

[Figure: Graphical model for a general augmentation.]

This trick is a special case of a more general strategy called variable (or data) augmentation — I prefer variable augmentation to data augmentation [cite key=chib], since it will not be confused with observed data preprocessing and manipulation. In this setting, the introduction of auxiliary variables has been most often used to develop better mixing Markov chain Monte Carlo samplers. This is because after augmentation, the conditional distributions of the model often have highly convenient and easy-to-sample-from forms.

One recent example of variable augmentation (and one that parallels our initial trick) is the Polya-Gamma variable augmentation. In this case, we can express the sigmoid function, which appears when computing the mean of the Bernoulli distribution, as:

$$\sigma(\psi) = \frac{e^{\psi}}{1 + e^{\psi}} = \frac{1}{2}e^{\psi/2}\int_0^{\infty} e^{-\omega\psi^2/2}\, p(\omega)\, d\omega,$$

where $\omega$ has a Polya-Gamma distribution, $\omega \sim PG(1, 0)$ [cite key=polson]. This nicely transforms the sigmoid into a Gaussian convolution (integrated against a Polya-Gamma random variable), giving us a different type of Gaussian integral trick. In fact, similar Gaussian integral tricks abound, and are typically described under the heading of Gaussian scale-mixture distributions.
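We can again check this identity numerically. The sketch below draws approximate $PG(1, 0)$ variates using the truncated sum-of-gammas representation from Polson et al. (the truncation level and test point here are assumptions of this sketch) and compares a Monte Carlo estimate of the right-hand side with the sigmoid:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pg1(n_samples, n_terms=200):
    """Approximate PG(1, 0) draws via the (truncated) representation
    omega = (1 / (2 pi^2)) * sum_k g_k / (k - 1/2)^2, g_k ~ Gamma(1, 1)."""
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(shape=1.0, scale=1.0, size=(n_samples, n_terms))
    return (g / (k - 0.5) ** 2).sum(axis=1) / (2 * np.pi**2)

psi = 1.3  # an arbitrary test point
omega = sample_pg1(200_000)

rhs = 0.5 * np.exp(psi / 2) * np.exp(-omega * psi**2 / 2).mean()
lhs = 1 / (1 + np.exp(-psi))  # sigmoid(psi)
print(lhs, rhs)  # agreement up to Monte Carlo and truncation error
```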

There are many examples of variable augmentation to be found, especially for binary and categorical distributions, and the papers gathered in the bibliography below provide much guidance.

Summary

The Gaussian integral trick is just one from a large class of variable augmentation strategies that are widely used in statistics and machine learning. They work by introducing auxiliary variables into our problems that induce an alternative representation, and that then give us additional statistical and computational benefits. Such methods lie at the heart of efficient inference algorithms, whether these be Monte Carlo or deterministic approximate inference schemes, making variable augmentation a favourite in our box of machine learning tricks.

[bibsource file=http://www.shakirm.com/blog-bib/trickOfTheDay/gaussIntegralTrick.bib]
