I am excited to be one of the speakers at this year's Deep Learning Summer School in Montreal on 6 August 2016. Slides can be found here: slides link. And the abstract is below. Building Machines that Imagine and Reason: Principles and Applications … Continue reading Talk: Building Machines that Imagine and Reason
I gave a talk entitled 'Bayesian Reasoning and Deep Learning' recently. Here is the abstract and the slides for interest. Slides Bayesian Reasoning and Deep Learning Abstract Deep learning and Bayesian machine learning are currently two of the most active … Continue reading Bayesian Reasoning and Deep Learning
Throughout this series, we have discussed deep networks by examining prototypical instances of these models, e.g., deep feed-forward networks, deep auto-encoders, and deep generative models, but have not yet interrogated the key word we have been using. We have not posed the question what … Continue reading A Statistical View of Deep Learning (VI): What is Deep?
We now routinely build complex, highly-parameterised models in an effort to address the complexities of modern data sets. We design our models so that they have enough 'capacity', and this is now second nature to us using the layer-wise design principles of … Continue reading A Statistical View of Deep Learning (V): Generalisation and Regularisation
Recurrent neural networks (RNNs) are now established as one of the key tools in the machine learning toolbox for handling large-scale sequence data. The ability to specify highly powerful models, advances in stochastic gradient descent, the availability of large volumes of … Continue reading A Statistical View of Deep Learning (IV): Recurrent Nets and Dynamical Systems
With the success of discriminative modelling using deep feedforward neural networks (or using an alternative statistical lens, recursive generalised linear models) in numerous industrial applications, there is an increased drive to produce similar outcomes with unsupervised learning. In this post, I'd like to explore the connections between denoising auto-encoders as a leading approach for unsupervised learning in deep learning, and density estimation in statistics. The statistical view I'll explore casts learning in denoising auto-encoders as that of inference in latent factor (density) models. Such a connection has a number of useful benefits and implications for our machine learning practice.
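As a toy illustration of the approach the post discusses (this sketch and all its names are my own, not taken from the post): a one-hidden-layer denoising auto-encoder with tied weights, trained in numpy to reconstruct clean data from Gaussian-corrupted inputs. Learning to undo the corruption is what ties the model to the underlying data density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data concentrated near a line: the "density" the DAE learns about.
X = rng.normal(size=(500, 1)) @ np.array([[1.0, 0.5]]) + 0.05 * rng.normal(size=(500, 2))

D, H = 2, 8
W = 0.1 * rng.normal(size=(D, H))   # tied weights: encoder uses W, decoder uses W.T
b, c = np.zeros(H), np.zeros(D)

def forward(Xn):
    h = np.tanh(Xn @ W + b)          # encoder
    return h, h @ W.T + c            # decoder (linear output)

def loss(Xn, Xc):
    _, R = forward(Xn)
    return np.mean(np.sum((R - Xc) ** 2, axis=1))

lr, sigma = 0.05, 0.3
initial = loss(X + sigma * rng.normal(size=X.shape), X)
for step in range(300):
    Xn = X + sigma * rng.normal(size=X.shape)  # corrupted inputs, clean targets X
    h, R = forward(Xn)
    # Backprop for the tied-weight squared-error objective.
    dR = 2.0 * (R - X) / len(X)
    da = (dR @ W) * (1 - h ** 2)
    W -= lr * (Xn.T @ da + dR.T @ h)           # gradient hits W twice (tied weights)
    b -= lr * da.sum(axis=0)
    c -= lr * dR.sum(axis=0)

final = loss(X + sigma * rng.normal(size=X.shape), X)
```

After training, reconstruction error on freshly corrupted data should have dropped well below its initial value, which is the behavioural signature of a working denoiser.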
The NIPS 2014 Workshop on Advances in Variational Inference was abuzz with new methods and ideas for scalable approximate inference. The concluding event of the workshop was a lively debate with David Blei, Neil Lawrence, Zoubin Ghahramani, Shinichi Nakajima and Matthias Seeger on the history, trends and open questions in variational inference. One of the questions posed to our panel and audience was: 'what are your variational inference tricks-of-the-trade?'
My current best practice includes: stochastic approximation, Monte Carlo estimation, amortised inference and powerful software tools. But this is a thought-provoking question that has motivated me to think in some more detail through my current variational inference tricks-of-the-trade, which are:
Continue reading "Variational Inference: Tricks of the Trade"
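To make one of these tricks, Monte Carlo estimation, concrete: a minimal sketch (the test function and all names here are my own, not from the post) of a reparameterisation-based Monte Carlo estimate of a gradient through an expectation under a Gaussian q.

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = 1.5, 0.7
f = lambda z: z ** 2              # toy integrand; E_q[f] = mu**2 + sigma**2
grad_f = lambda z: 2.0 * z        # df/dz

# Reparameterise: z = mu + sigma * eps with eps ~ N(0, 1), so samples of z
# become a deterministic, differentiable function of mu.
eps = rng.normal(size=100_000)
z = mu + sigma * eps

# Pathwise (reparameterisation) estimator of d/dmu E_q[f(z)]:
# dz/dmu = 1, so the estimator is just the sample mean of f'(z).
grad_mu_mc = np.mean(grad_f(z))

grad_mu_exact = 2.0 * mu          # analytic: d/dmu (mu**2 + sigma**2)
```

With enough samples the Monte Carlo estimate concentrates around the analytic gradient, which is what makes stochastic-gradient training of variational objectives possible.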
I recently received some queries on our paper: S. Mohamed, K. Heller and Z. Ghahramani. Bayesian and L1 Approaches for Sparse Unsupervised Learning. International Conference on Machine Learning (ICML), June 2012. The questions were very good and I thought it would be useful to post these for future reference. The paper looked at Bayesian and optimisation approaches for learning sparse models. For Bayesian models, we advocated the use of spike-and-slab sparse models and specified an adapted latent Gaussian model with an additional set of discrete latent variables that indicate whether a latent dimension is active or not. This … Continue reading Bayesian sparsity using spike-and-slab priors
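As a rough sketch of the spike-and-slab construction described above (variable names are my own, not the paper's): each latent dimension is the product of a discrete Bernoulli "spike" indicator and a continuous Gaussian "slab" value, so a dimension is exactly zero whenever its indicator is off.

```python
import numpy as np

rng = np.random.default_rng(2)

N, K, pi = 10_000, 5, 0.2        # samples, latent dims, inclusion probability

s = rng.random((N, K)) < pi      # discrete indicators: is this dimension active?
v = rng.normal(size=(N, K))      # continuous Gaussian "slab" values
z = s * v                        # spike-and-slab latent: exact zeros when s == 0

sparsity = np.mean(z == 0.0)     # fraction of exactly-zero entries, approx 1 - pi
```

Unlike an L1 penalty, which only shrinks values towards zero, the discrete indicators put actual probability mass on exact zeros, which is the appeal of the Bayesian spike-and-slab view.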