A Statistical View of Deep Learning: Retrospective

Over the past 6 months, I've taken to writing a series of posts (one each month) on a statistical view of deep learning, with two principal motivations in mind. The first was as a personal exercise to make concrete, and to test the limits of, the way that I think about and use deep learning in my everyday work. The second was to highlight important statistical connections and implications of deep learning that I do not see being made in the popular courses, reviews and books on deep learning, but which are extremely important to keep in mind. Post Links and Summary: links to each post with a short …

Continue reading "A Statistical View of Deep Learning: Retrospective"

Chinese Edition: A Statistical View of Deep Learning (I) / 从统计学角度来看深度学习

Colleagues from the Capital of Statistics, an online statistics community in China, have been kind enough to translate the first post in this series, A Statistical View of Deep Learning (I): Recursive GLMs, in the hope that it might be of interest to machine learning and statistics researchers in China (and to Chinese readers more broadly). Find it here: 从统计学角度来看深度学习(1):递归广义线性模型

Continue reading "Chinese Edition: A Statistical View of Deep Learning (I) / 从统计学角度来看深度学习"

A Statistical View of Deep Learning (IV): Recurrent Nets and Dynamical Systems

Recurrent neural networks (RNNs) are now established as one of the key tools in the machine learning toolbox for handling large-scale sequence data. The ability to specify highly powerful models, advances in stochastic gradient descent, the availability of large volumes of …

Continue reading "A Statistical View of Deep Learning (IV): Recurrent Nets and Dynamical Systems"
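The dynamical-systems reading the post develops can be stated compactly: the hidden state of an RNN evolves under a learned nonlinear transition function. Below is a minimal sketch of that state update in Python/NumPy; the dimensions, random initialisation and tanh nonlinearity are illustrative assumptions, not the post's specific setup.

```python
import numpy as np

def rnn_step(h, x, W, U, b):
    """One step of a vanilla RNN, read as a nonlinear dynamical system:
    the hidden state h_t is a deterministic function of the previous
    state h_{t-1} and the current input x_t."""
    return np.tanh(W @ h + U @ x + b)

# Illustrative dimensions and randomly initialised parameters.
rng = np.random.default_rng(0)
hidden_dim, input_dim, T = 4, 3, 10
W = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
U = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
b = np.zeros(hidden_dim)

# Unroll the dynamics over a sequence: h_t = f(h_{t-1}, x_t).
h = np.zeros(hidden_dim)
for x in rng.normal(size=(T, input_dim)):
    h = rnn_step(h, x, W, U, b)
```

Viewed this way, familiar questions about RNNs, such as stability and memory over long sequences, become questions about the dynamics induced by the transition function.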

A Statistical View of Deep Learning (II): Auto-encoders and Free Energy

With the success of discriminative modelling using deep feedforward neural networks (or, using an alternative statistical lens, recursive generalised linear models) in numerous industrial applications, there is an increased drive to produce similar outcomes with unsupervised learning. In this post, I'd like to explore the connections between denoising auto-encoders, a leading approach for unsupervised learning in deep learning, and density estimation in statistics. The statistical view I'll explore casts learning in denoising auto-encoders as inference in latent factor (density) models. Such a connection has a number of useful benefits and implications for our machine learning practice.
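To make the object under discussion concrete, here is a minimal sketch of a denoising auto-encoder's corrupt-encode-decode loop in Python/NumPy; the architecture, Gaussian noise model and squared-error loss are illustrative assumptions rather than the post's own construction.

```python
import numpy as np

rng = np.random.default_rng(1)
data_dim, code_dim = 8, 3

# Illustrative encoder/decoder weights; a real model would learn these.
W_enc = rng.normal(scale=0.1, size=(code_dim, data_dim))
W_dec = rng.normal(scale=0.1, size=(data_dim, code_dim))

def denoise(x, noise_std=0.5):
    """Corrupt the input, encode the corrupted version, and reconstruct.
    Training to invert the corruption is what ties the denoising
    criterion to estimating the underlying data density."""
    x_tilde = x + noise_std * rng.normal(size=x.shape)  # corruption
    h = np.tanh(W_enc @ x_tilde)                        # encoder
    x_hat = W_dec @ h                                   # decoder
    return x_hat, np.sum((x - x_hat) ** 2)              # reconstruction loss

x = rng.normal(size=data_dim)
x_hat, loss = denoise(x)
```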

Continue reading "A Statistical View of Deep Learning (II): Auto-encoders and Free Energy"

A Statistical View of Deep Learning (I): Recursive GLMs

Deep learning and the use of deep neural networks (Bishop, 1995) are now established as a key tool for practical machine learning. Neural networks have an equivalence with many existing statistical and machine learning approaches, and I would like to explore one of these views in this post. In particular, I'll look at the view of deep neural networks as recursive generalised linear models (RGLMs). Generalised linear models form one of the cornerstones of probabilistic modelling and are used in almost every field of experimental science, so this connection is an extremely useful one to have in mind. I'll focus here on what are called feedforward neural networks and leave a discussion of the statistical connections to recurrent networks to another post.
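The recursion is easy to state in code: a GLM is a linear predictor passed through an inverse link function, and composing GLMs recursively, with each inner inverse link acting as an activation, yields a feedforward network. The sketch below is a minimal illustration; the layer widths and choice of link functions are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glm(x, W, b, inverse_link):
    """A generalised linear model: a linear predictor eta = Wx + b
    mapped through an inverse link function."""
    return inverse_link(W @ x + b)

# Recursively composing GLMs yields a feedforward neural network:
# the inverse link of each inner GLM plays the role of an activation.
rng = np.random.default_rng(2)
dims = [5, 4, 3, 1]  # illustrative layer widths
params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(m))
          for n, m in zip(dims[:-1], dims[1:])]

x = rng.normal(size=dims[0])
h = x
for W, b in params[:-1]:
    h = glm(h, W, b, np.tanh)        # inner GLMs: tanh inverse link
y = glm(h, *params[-1], sigmoid)     # outer GLM, e.g. logistic regression
```

The outer GLM alone is ordinary logistic regression; the statistical view simply reads the hidden layers as further GLMs feeding it.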

Continue reading "A Statistical View of Deep Learning (I): Recursive GLMs"

Variational Inference: Tricks of the Trade

The NIPS 2014 Workshop on Advances in Variational Inference was abuzz with new methods and ideas for scalable approximate inference. The concluding event of the workshop was a lively debate with David Blei, Neil Lawrence, Zoubin Ghahramani, Shinichi Nakajima and Matthias Seeger on the history, trends and open questions in variational inference. One of the questions posed to our panel and audience was: 'what are your variational inference tricks-of-the-trade?'

My current best practice includes: stochastic approximation, Monte Carlo estimation, amortised inference and powerful software tools. But this is a thought-provoking question that has motivated me to think in some more detail through my current variational inference tricks-of-the-trade, which are:
Continue reading "Variational Inference: Tricks of the Trade"
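As one concrete instance of the Monte Carlo estimation and stochastic approximation tricks mentioned above, here is a minimal sketch of a reparameterised Monte Carlo estimate of the variational lower bound (ELBO) for a toy Gaussian model; the model and the variational family below are illustrative assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_normal(z, mean, std):
    """Log density of a univariate Gaussian N(z; mean, std^2)."""
    return -0.5 * (np.log(2 * np.pi) + 2 * np.log(std)
                   + ((z - mean) / std) ** 2)

def elbo_estimate(x, mu, sigma, num_samples=100):
    """Monte Carlo estimate of the ELBO for a toy model
    p(z) = N(0, 1), p(x | z) = N(z, 1), with q(z) = N(mu, sigma^2).
    Sampling z via z = mu + sigma * eps (the reparameterisation)
    keeps the estimator differentiable in (mu, sigma)."""
    eps = rng.normal(size=num_samples)
    z = mu + sigma * eps
    log_joint = log_normal(x, z, 1.0) + log_normal(z, 0.0, 1.0)
    log_q = log_normal(z, mu, sigma)
    return np.mean(log_joint - log_q)

print(elbo_estimate(x=0.5, mu=0.25, sigma=0.8))
```

Averaging over samples gives an unbiased estimate of the bound, which is what makes stochastic-gradient optimisation of variational objectives practical at scale.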