A talk that explores the convergence of deep learning and Bayesian inference. We'll take a statistical tour of deep learning, think about approximate Bayesian inference, and explore the idea of inference with memory and the different ways it manifests itself in contemporary machine learning.

# Slides

The slides are available at this link.

# Abstract

Deep learning and Bayesian machine learning are currently two of the most active areas of machine learning research. Deep learning provides a powerful class of models and an easy-to-use framework for learning that now delivers state-of-the-art methods for applications ranging from image classification to speech recognition. Bayesian reasoning provides a powerful approach to knowledge integration, inference, and decision making that has established it as the key tool for data-efficient learning, uncertainty quantification, and robust model composition, widely used in applications ranging from information retrieval to large-scale ranking. Each of these research areas has shortcomings that can be effectively addressed by the other, pointing towards a needed convergence of the two areas, one that would enhance our machine learning practice.

One powerful outcome of this convergence is the ability to build systems for probabilistic inference with memory. Memory-based inference amortises the cost of probabilistic reasoning by cleverly reusing prior computations. To explore this, we shall take a statistical tour of deep learning, re-examine latent variable models and approximate Bayesian inference, and make connections to denoising auto-encoders and other stochastic encoder-decoder systems. In this way, we will make sense of what memory in inference might mean, and highlight the use of amortised inference in many other parts of machine learning.
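To make the idea of amortised inference concrete, here is a minimal sketch of an inference network of the kind used in stochastic encoder-decoder systems such as variational auto-encoders. All dimensions, weight initialisations, and function names are hypothetical choices for illustration: the point is that a single shared network maps any observation x to the parameters of an approximate posterior q(z|x), so the cost of inference for a new data point is one forward pass rather than a fresh per-datapoint optimisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
x_dim, h_dim, z_dim = 8, 16, 2

# Encoder ("inference network") weights. These weights are the "memory":
# once trained, they store reusable computation for inference on any x.
W_h = rng.normal(0.0, 0.1, (h_dim, x_dim))
W_mu = rng.normal(0.0, 0.1, (z_dim, h_dim))
W_lv = rng.normal(0.0, 0.1, (z_dim, h_dim))

def encode(x):
    """Map an observation x to the parameters of q(z|x) = N(mu, diag(exp(log_var)))."""
    h = np.tanh(W_h @ x)
    return W_mu @ h, W_lv @ h  # mean and log-variance

def sample_z(mu, log_var):
    """Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Amortised inference on a new data point is a single forward pass,
# reusing the computation stored in the encoder weights.
x = rng.standard_normal(x_dim)
mu, log_var = encode(x)
z = sample_z(mu, log_var)
print(z.shape)  # (2,)
```

In the non-amortised alternative, one would instead run a separate optimisation of the variational parameters (mu, log_var) for every data point; the shared encoder trades a little per-point accuracy for a large saving in repeated computation.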