Learning in Brains and Machines

This page collects all the posts associated with the series Learning in Brains and Machines. I'll soon add more context, a discussion of the general framework I'll use, and further motivation.

  1. Temporal differences. 
    • We explore the computational problem of associative learning in the brain, and how it is paralleled by value learning in machines. Value learning allows both brains and machines to make long-term predictions about rewards, and to use prediction errors to learn about rewards and take optimal actions.
  2. The dogma of sparsity.
    • Sparsity is an important computational strategy used by both brains and machines: it allows computation with representations that are efficient for memory formation, energy use and reasoning. This post explores the evidence for, and manifestations of, sparsity in brains and machines.
  3. Synergistic and modular action.
    • Modularity and temporal abstraction are important computational strategies that allow fast learning, strong generalisation and flexible action. This post contrasts action synergies in the brain with options in hierarchical reinforcement learning, looking at the available biological evidence and at how the action-selection approaches of the previous posts can be modified to incorporate this more general principle.
  4. Episodic and interactive memory.
    • Autobiographical, or episodic, memory is an important tool for rapid learning and efficient use of experience, and is part of the suite of complementary learning systems used in the brain. This complementarity of learning systems is paralleled in machines through the use of non-parametric and semi-parametric models. This post unpacks episodic memory and learning in brains, and the breadth of statistical modelling approaches used in machines.
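The value-learning idea behind the first post can be made concrete with a small sketch: tabular TD(0) on a random-walk chain, where a prediction error drives every update. The task, step sizes and function name here are illustrative assumptions for this page, not taken from the post itself.

```python
import random

def td0_chain(n_states=5, episodes=2000, alpha=0.1, gamma=1.0, seed=0):
    """Tabular TD(0) on a random-walk chain (an illustrative toy task):
    states 0..n_states-1, start in the middle, move left or right
    uniformly at random; reward 1 on exiting to the right, 0 on exiting
    to the left. Returns the learned state-value estimates."""
    rng = random.Random(seed)
    V = [0.0] * n_states
    for _ in range(episodes):
        s = n_states // 2
        while True:
            s_next = s + rng.choice([-1, 1])
            if s_next < 0:             # exit left: reward 0, terminal
                r, v_next, done = 0.0, 0.0, True
            elif s_next >= n_states:   # exit right: reward 1, terminal
                r, v_next, done = 1.0, 0.0, True
            else:
                r, v_next, done = 0.0, V[s_next], False
            delta = r + gamma * v_next - V[s]  # temporal-difference prediction error
            V[s] += alpha * delta              # learn from the prediction error
            if done:
                break
            s = s_next
    return V

values = td0_chain()
```

After training, the estimates increase from the left end of the chain to the right, approximating the probability of eventually exiting on the rewarded side; the prediction error `delta` is the machine analogue of the dopaminergic reward-prediction-error signal discussed in the post.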