[dropcap]This[/dropcap] trick is unlike the others we've conjured. It will not reveal a clever manipulation of a probability or an integral or a derivative, or produce a code … Continue reading Machine Learning Trick of the Day (8): Instrumental Thinking
[dropcap]A[/dropcap] probability on its own is often an uninteresting thing. But when we can compare probabilities, that is when their full splendour is revealed. By comparing probabilities we … Continue reading Machine Learning Trick of the Day (7): Density Ratio Trick
[dropcap]Sources[/dropcap] of inspiration are one thing we do not lack in machine learning. This is what, for me at least, makes machine learning research such a rewarding and exciting area to work in. We … Continue reading Cognitive Machine Learning: Prologue
[dropcap]My[/dropcap] memory, like yours, exerts a powerful influence over my interaction with the world. It is reconstructive and evocative; I can easily form an image of hot December days in the South African summer, and remember my first time—it was … Continue reading Learning in Brains and Machines (4): Episodic and Interactive Memory
Colleagues from the Capital of Statistics, an online statistics community in China, have been kind enough to translate my first post in this series, A Statistical View of Deep Learning (I): Recursive GLMs, in the hope that it might be of interest to machine learning and statistics researchers in China (and to Chinese readers). Find it here: 从统计学角度来看深度学习（1）：递归广义线性模型 Continue reading Chinese Edition: A Statistical View of Deep Learning (I)/ 从统计学角度来看深度学习
Deep learning and the use of deep neural networks [cite key="bishop1995neural"] are now established as a key tool for practical machine learning. Neural networks have an equivalence with many existing statistical and machine learning approaches, and I would like to explore one of these views in this post. In particular, I'll look at the view of deep neural networks as recursive generalised linear models (RGLMs). Generalised linear models form one of the cornerstones of probabilistic modelling and are used in almost every field of experimental science, so this connection is an extremely useful one to have in mind. I'll focus here on what are called feedforward neural networks and leave a discussion of the statistical connections to recurrent networks to another post.
Continue reading "A Statistical View of Deep Learning (I): Recursive GLMs"
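The recursive-GLM view mentioned in the excerpt above can be sketched in a few lines of NumPy: a GLM computes a linear predictor and passes it through an inverse link function, and feeding one GLM's output into another reproduces a feedforward layer stack. This is a minimal illustrative sketch, not the post's own code; the function names (`glm_layer`, `sigmoid`) and the particular weights are my assumptions for demonstration.

```python
import numpy as np

def glm_layer(x, W, b, inverse_link):
    """One generalised linear model: linear predictor eta = Wx + b,
    mapped through an inverse link function (the 'activation')."""
    return inverse_link(W @ x + b)

def sigmoid(eta):
    # Inverse of the logit link, as in logistic regression.
    return 1.0 / (1.0 + np.exp(-eta))

rng = np.random.default_rng(0)
x = rng.standard_normal(4)  # a single 4-dimensional input

# Recursion: the output of one GLM becomes the covariates of the next.
# Two stacked GLMs are exactly a two-layer feedforward network.
h = glm_layer(x, rng.standard_normal((3, 4)), np.zeros(3), np.tanh)
y = glm_layer(h, rng.standard_normal((1, 3)), np.zeros(1), sigmoid)
print(y.shape)
```

With a sigmoid inverse link on the final layer, the output lies in (0, 1), matching the Bernoulli/logistic-regression case of the GLM family.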