Cognitive Machine Learning: Prologue
· 980 words · All posts in series · Sources of inspiration is one thing we do not lack in machine learning. This is what, for me at least, makes machine learning research such a rewarding and exciting area to work in. We gain inspiration from our traditional neighbours in statistics, signal processing and control engineering, information theory and ...

Talk: Building Machines that Imagine and Reason
I am excited to be one of the speakers at this year's Deep Learning Summer School in Montreal on the 6th August 2016. Slides can be found here: slides link. And the abstract is below. Building Machines that Imagine and Reason: Principles and Applications of Deep Generative Models. Deep generative models provide a solution to the problem of unsupervised ...

Learning in Brains and Machines (4): Episodic and Interactive Memory
· 2080 words · All posts in series · My memory, like yours, exerts a powerful influence over my interaction with the world. It is reconstructive and evocative; I can easily form an image of hot December days in the South African summer, and remember my first time—it was morning on an Easter Sunday in April a few years ago—that I saw and felt snow. My ...

Learning in Brains and Machines (3): Synergistic and Modular Action
· 1796 words · All posts in series · There is a dance—precisely choreographed and executed—that we perform throughout our lives. This is the dance formed by our movements. Our movements are our actions and the final outcome of our decision making processes. Single actions are built into reusable sequences, sequences are composed into complex routines, routines are arranged into elegant choreographies, and so the complexity of ...

Learning in Brains and Machines (2): The Dogma of Sparsity
· 1700 words · collected posts · The functioning of our brains, much like the intrigue of a political drama, is a neuronal house-of-cards. The halls of cognitive power are run by recurring alliances of neurons that deliberately conspire to control information processing and decision making. 'Suspicious coincidences' in neural activation—as the celebrated neuroscientist Horace Barlow observed—abound; transparency in neural ...

Learning in Brains and Machines (1): Temporal Differences
· 1800 words · collected posts · We all make mistakes, and as is often said, only then can we learn. Our mistakes allow us to gain insight, and the ability to make better judgements and fewer mistakes in future. In their influential paper, the neuroscientists Robert Rescorla and Allan Wagner put this more succinctly, 'organisms only ...
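The prediction-error idea in this teaser is the core of temporal-difference learning. As a rough sketch only (a hypothetical toy chain environment; these names and numbers are illustrative, not code from the post), tabular TD(0) nudges each state's value estimate toward the received reward plus the discounted value of the next state:

```python
# A minimal sketch of tabular TD(0) on a hypothetical 5-state chain
# (illustrative toy only; not code from the post). Moving right earns
# a reward of 1 on reaching the final state.
n_states = 5
alpha, gamma = 0.1, 0.9      # learning rate and discount factor
V = [0.0] * n_states         # value estimate for each state

def step(state):
    """Deterministic toy environment: always move one state right."""
    nxt = min(state + 1, n_states - 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward

for _ in range(200):                          # episodes
    s = 0
    while s < n_states - 1:
        s_next, r = step(s)
        delta = r + gamma * V[s_next] - V[s]  # temporal-difference error
        V[s] += alpha * delta                 # move estimate toward target
        s = s_next

print([round(v, 2) for v in V])   # values rise toward the rewarded state
```

The quantity delta is the temporal-difference error, the prediction-error signal that the post goes on to relate to learning in brains.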

A Year of Approximate Inference: Review of the NIPS 2015 Workshop
Approximate inference no longer lies at the fringe. The importance of how we connect our observed data to the assumptions made by our statistical models—the task of inference—was a central part of this year's Neural Information Processing Systems (NIPS) conference: Zoubin Ghahramani's opening keynote set the stage and the pageant of inference included forecasting, compression, decision making, personalised modelling, and automating the ...

Machine Learning Trick of the Day (6): Tricks with Sticks
Our first encounters with probability are often through a collection of simple games. The games of probability are played with coins, dice, cards, balls and urns, and sticks and strings. Using these games, we built an intuition that allows us to reason and act in ways that account for randomness in the world. But ...

Machine Learning Trick of the Day (5): Log Derivative Trick
Machine learning involves manipulating probabilities. These probabilities are most often represented as normalised probabilities or as log-probabilities. An ability to shrewdly alternate between these two representations is a vital step towards strengthening the probabilistic dexterity we need to solve modern machine learning problems. Today's trick, the log derivative trick, helps us to do just this, using the property of the derivative of the logarithm. This ...
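For concreteness, the property the teaser names can be written out directly; this is the standard identity in generic notation, not an extract from the post:

```latex
% The log-derivative identity: the gradient of a log-probability equals
% the gradient of the probability, renormalised by the probability.
\nabla_\theta \log p(x;\theta) \;=\; \frac{\nabla_\theta\, p(x;\theta)}{p(x;\theta)}

% Rearranged as \nabla_\theta p = p\, \nabla_\theta \log p, it converts
% the gradient of an expectation into an expectation of log-gradients
% (the score-function form):
\nabla_\theta\, \mathbb{E}_{p(x;\theta)}\!\left[f(x)\right]
  \;=\; \mathbb{E}_{p(x;\theta)}\!\left[f(x)\, \nabla_\theta \log p(x;\theta)\right]
```

This rearrangement is what makes the trick useful: an expectation of log-gradients can be estimated by Monte Carlo even when the gradient of the expectation cannot be computed directly.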

Talk: Memory-based Bayesian Reasoning and Deep Learning
A talk that explores the convergence of deep learning and Bayesian inference. We'll take a statistical tour of deep learning, think about approximate Bayesian inference, and explore the idea of doing inference-with-memory and the different ways that this manifests itself in contemporary machine learning. Slides: see the slides using this link. Abstract: Deep learning and Bayesian ...