Learning in Brains and Machines (1): Temporal Differences

We all make mistakes, and as is often said, only then can we learn. Our mistakes allow us to gain insight, and the ability to make better judgements and fewer mistakes in future. In their influential paper, the neuroscientists Robert Rescorla and Allan Wagner put this more succinctly, 'organisms only learn when events violate their expectations' [1]. And so too of learning in machines. In both brains and machines we learn by trading the currency of violated expectations: mistakes that are represented as prediction errors.

We rely on predictions to aid every part of our decision-making. We make predictions about the position of falling objects so that we can catch them, the emotional state of other people to set the tone of our conversations, the future behaviour of economic indicators, and the potentially adverse effects of new medical treatments. Of the multitude of prediction problems that exist, the prediction of rewards is one of the most fundamental and one that brains are especially good at. This post explores the neuroscience and mathematics of rewards, and the mutual inspiration these fields offer us for the understanding and design of intelligent systems.

Associative Learning in the Brain

A reward, like the affirmation of a parent or the pleasure of eating something sweet, is the positive value we associate with events that are in some way beneficial (for survival or pleasure). We quickly learn to identify states and actions that lead to rewards, and constantly adapt our behaviours to maximise them. This ability is known as associative learning, and it leads us, in the framework of Marr's levels of analysis, to an important computational problem: what is the principle of associative learning in brains and machines?

As modern-day neuroscientists, the tools with which we can interrogate the function of the brain are many: psychological experiments, functional magnetic resonance imaging, pharmacological interventions, single-cell neural recordings, fast-scan cyclic voltammetry, optogenetics; all provide us with a rich implementation-level understanding of associative learning at many levels of granularity. A tale of discovery using these tools unfolds.

It begins with the famous psychological experiments of Pavlov and his dogs. Pavlov's experiments identify two types of behavioural phenomena—Pavlovian or classical conditioning, and instrumental conditioning—which in turn identify two distinct reward-prediction problems. Pavlovian conditioning exposes our ability to make predictions of future rewards given a cue (like the sound of a bell that indicates food); instrumental conditioning shows we can predict and select actions that lead to future rewards. When functional MRI is used to obtain blood oxygen-level dependent (BOLD) contrast images of humans performing classical or instrumental conditioning tasks, one particular area of the brain, the striatum, stands out [2].

Dopamine pathways in the brain [Wikimedia].

The striatum is special since it is a major target of the neurotransmitter dopamine, which leads to the sneaking suspicion that dopamine plays an important role in reward-based learning. This suspicion is substantiated through a series of pharmacological experiments using neuroleptic drugs (dopamine antagonists): without dopamine, the normal function of associative learning is noticeably impeded [3]. Neurons in the brain that use dopamine as a neurotransmitter are referred to as dopaminergic neurons. Such neurons are concentrated in the midbrain and are part of several pathways. The nigrostriatal pathway links the substantia nigra (SN) with the striatum, and the mesolimbic pathway links the ventral tegmental area (VTA) with the forebrain (see image).

When single-cell recordings of dopaminergic neurons are made from awake monkeys as they reach for a rewarding sip of juice after seeing a cue (e.g., a light), there is a distinct dopaminergic response [4]. When this reward is first experienced, there is a clear response from dopaminergic neurons, but this fades after a number of trials. The implication is that dopamine is not a representation of reward in the brain. Instead, dopamine was proposed as a means of representing the error made in predicting rewards. A means of causally manipulating multiple neurons is needed to verify such a hypothesis. Optogenetics offers exactly such a tool: by introducing light-activated ion channels, it provides a means of stimulating precise neural-firing patterns in collections of neurons and observing their effects. Using optogenetic activation, dopamine neurons were selectively triggered in rats to mimic the effect of a positive prediction error, allowing a causal link between prediction errors, dopamine and learning to be established [5].

And so goes the reward-prediction error hypothesis: 'the fluctuating delivery of dopamine from the VTA to cortical and subcortical target structures in part delivers information about prediction errors between the expected amount of reward and the actual reward' [6]. The hypothesis is not unanimously accepted, and incentive salience is one alternative [7]. There is a need for a review that includes the more recent evidence that has accumulated, but the existing surveys remain insightful.

The reward-prediction error hypothesis is compelling. With support from ever-increasing evidence, we have established the computational problem of associative learning in the brain, an algorithmic solution that predicates learning on prediction errors, and an implementation in brains through the phasic activity of dopaminergic neurons. How can this learning strategy be used by machines?

Value Learning

Reinforcement learning—the machine learning of rewards—provides us with the mathematical framework with which we can understand and provide an algorithmic specification of the computational problem of associative learning. As agents in the world, we observe the state of the world s, take an action a, and receive a reward r, and we do this continually (a perception-action loop). In reinforcement learning the associative learning problem is known as value learning, and involves learning about two types of value function: state value functions and state-action value functions.

\textrm{State value: }V^\pi(s_0) = \sum_t \gamma^t\mathbb{E}_{p_{\pi}(a_t | s_t)p(s_t)}\left[r_t \right]

\textrm{State-action value: }Q^\pi(s_0, a_0) = \sum_{t} \gamma^t\mathbb{E}_{p_{\pi}(a_t | s_t)p(s_t)}\left[r_t \right]


Simplified perception-action loop.

These equations embody our computational problem:

  • We say that the state-value V of a state s_0 is the discounted sum of all the expected future rewards r_t that we receive by taking actions a_t and reaching state s_t.
  • Our actions a_t are chosen by a policy \pi that defines how we behave.
  • A discount factor \gamma, between 0 and 1, is introduced to encode a preference for immediate rewards over distant rewards (and to make the sum finite for bounded rewards).
  • The expectation is with respect to the policy p_\pi(a_t | s_t) and the marginal state distribution p(s_t) (representing all the noisy state transitions and ways in which actions can be selected).
  • The state-action value Q is similar, and is the value we associate with state s_0, when our first action is a_0, but all subsequent actions are taken following the behaviour policy \pi.

Like in classical conditioning, the state value function V allows us to predict future rewards when the right stimuli are present. Like in instrumental conditioning, the state-action value function Q allows us to select the next action that will allow us to obtain the most reward in future.
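
To make the definition of the value function concrete, here is a minimal Python sketch, entirely illustrative, that estimates V^\pi(s_0) by averaging discounted returns over sampled episodes; the `env` interface (reset/step) and the `policy` function are hypothetical stand-ins, not anything defined in this post.

```python
def discounted_return(rewards, gamma):
    """Sum of gamma^t * r_t over one episode."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def monte_carlo_value(env, policy, gamma=0.99, n_episodes=1000):
    """Monte Carlo estimate of V^pi(s_0): the average discounted return from the start state."""
    returns = []
    for _ in range(n_episodes):
        state, rewards, done = env.reset(), [], False
        while not done:
            state, reward, done = env.step(policy(state))  # act with the policy, observe reward
            rewards.append(reward)
        returns.append(discounted_return(rewards, gamma))
    return sum(returns) / len(returns)
```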

In general, there are three ingredients needed for a machine learning solution to a learning problem: a model, a learning objective and an algorithm. There are many types of models we can use to represent value functions: a non-parametric model that maintains an explicit table of values for every state, a nearest-neighbour method or a Gaussian process; or a parametric model with parameters \theta, such as a deep neural network, a spline model or other basis-function methods.
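
As a rough illustration of the first two modelling choices, the sketch below (my own toy example, not the post's) contrasts an explicit value table with a simple linear basis-function model:

```python
import numpy as np

n_states = 5                                     # toy discrete state space

# (1) Non-parametric: an explicit table with one value per state.
value_table = {s: 0.0 for s in range(n_states)}

# (2) Parametric: a linear model V(s) = theta . phi(s) over basis features phi.
def phi(state):
    """Hypothetical feature map; here a one-hot encoding of the state."""
    return np.eye(n_states)[state]

theta = np.zeros(n_states)

def value(state):
    return float(theta @ phi(state))
```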

If we take one step in our environment and move from state s_t to state s_{t+1}, the value function should satisfy the following consistency criterion:

V(s_t) = r(s_t,a_t) + \gamma V(s_{t+1})

Q(s_t, a_t) = r_t + \gamma \max_a Q(s_{t+1}, a)
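
In code, these one-step consistency targets might look as follows; this is only a sketch, and the `value` and `q_value` callables are assumed stand-ins for whichever model represents the value functions:

```python
def v_target(reward, next_state, value, gamma=0.99):
    """Right-hand side of the state-value consistency equation: r + gamma * V(s')."""
    return reward + gamma * value(next_state)

def q_target(reward, next_state, q_value, actions, gamma=0.99):
    """Right-hand side of the state-action consistency: r + gamma * max_a Q(s', a)."""
    return reward + gamma * max(q_value(next_state, a) for a in actions)
```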

The self-consistency property of the value function is what we use to obtain our learning objective. The simplest way to do this is to use a measure of discrepancy between the two sides of the equation. For the state value function, we get:

\mathcal{L} = \left(r(s_t,a_t) + \gamma V(s_{t+1}) -V(s_t) \right)^2

This is known as the squared Bellman residual [8], and importantly, introduces a prediction error that will drive reward-based learning.
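
For the state-value case, a minimal sketch of this objective (again with a hypothetical `value` callable) is:

```python
def squared_bellman_residual(value, state, reward, next_state, gamma=0.99):
    """Squared Bellman residual: the learning objective for the state-value function."""
    delta = reward + gamma * value(next_state) - value(state)  # the reward-prediction error
    return delta ** 2
```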

We derive a learning algorithm by gradient descent on this objective function. Consider state-value estimation; we obtain two types of descent algorithm, depending on how we treat the term \nu = r(s_t,a_t) + \gamma V(s_{t+1}), the right-hand side of the Bellman consistency equation.

  • Firstly, we can think of \nu as a fixed target for the regression of s_t to V(s_t), in which case \nu has no parameters.
  • In the second case, \nu is not fixed, and instead forms a residual model that itself has parameters we wish to optimise.

As a result, we obtain two update rules for gradient descent depending on which approach we take, a direct gradient and a residual gradient:

\textrm{Temporal difference: }\delta_t=r(s_t,a_t)+\gamma V(s_{t+1}) -V(s_t)

\textrm{Direct gradient: }\nabla_\theta \mathcal{L} =\delta_t\nabla_\theta V(s_t)

\textrm{Residual gradient: }\nabla_\theta \mathcal{L} =\delta_t\left(\nabla_\theta V(s_t) - \gamma \nabla_\theta V(s_{t+1}) \right)

\textrm{Parameter update: }\theta^{new} = \theta + \eta\nabla_\theta \mathcal{L}

The term \delta_t is the temporal difference (TD)—an error in the prediction of rewards at two time points—that quantifies the extent to which our expectations (value predictions) have been violated. Early in learning, our TD errors will be high; as we learn, the TD error fades, shadowing the response of dopamine in the brain (see image).

Reward prediction errors are high when first encountered, but diminish over time. [Niv, 2009]
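
To connect these update rules to code, here is a minimal sketch under the assumption of a linear value model V(s) = \theta \cdot \phi(s); the feature map `phi`, the step size `eta` and the toy one-hot example are my own illustrations, not code from the post:

```python
import numpy as np

def td_update(theta, phi, s, r, s_next, gamma=0.9, eta=0.1, residual=False):
    """One step of TD learning on the transition (s, r, s_next) for a linear V."""
    v, v_next = theta @ phi(s), theta @ phi(s_next)
    delta = r + gamma * v_next - v                       # temporal-difference error
    if residual:
        update = delta * (phi(s) - gamma * phi(s_next))  # residual-gradient direction
    else:
        update = delta * phi(s)                          # direct-gradient direction (fixed target)
    return theta + eta * update                          # parameter update

# Toy usage: one-hot features over three states.
phi = lambda s: np.eye(3)[s]
theta = np.zeros(3)
theta = td_update(theta, phi, s=0, r=1.0, s_next=1)
```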

We have derived the famous temporal difference learning algorithm in reinforcement learning [9], which is the stochastic update rule for parameters of a value function using gradients of the squared Bellman residual. And all this can be repeated for the state-action value function, yielding the Q-learning algorithm. This is a powerful mathematical framework in which to derive a range of more sophisticated reward-based learning systems:

  • N-step TD methods
    • Instead of forming the consistency equations using the reward obtained after a single step, we can use N steps of rewards; geometrically weighted averages of these N-step returns give the TD(\lambda) family of algorithms.
  • Fitted and Deep TD methods 
    • Fitted value-function methods make use of rich parametric models and learn their parameters by TD learning. Both Neural Fitted Q-iteration (NFQ) and Deep Q-Networks (DQN) use deep neural networks to represent the value function, and are examples of deep reinforcement learning.
  • Faster reward learning and alternative algorithms
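
To make the state-action case concrete, the sketch below shows tabular Q-learning with epsilon-greedy action selection; it is an illustration under assumed interfaces (an `env` with reset() and step(), and a finite `actions` list), not code from the post:

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, gamma=0.99, eta=0.1, epsilon=0.1):
    """Tabular Q-learning: Q[(state, action)] is updated from one-step TD errors."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit the current Q estimates, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Bootstrapped target; no bootstrapping from a terminal state.
            target = reward if done else reward + gamma * max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += eta * (target - Q[(state, action)])
            state = next_state
    return Q
```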

Final Words

The computational problem of associative learning in the brain is paralleled by value learning in machines. Both brains and machines make long-term predictions about rewards, and use prediction errors to learn about rewards and how to take optimal actions. This correspondence is one of the great instances of the mutual inspiration that neuroscience and machine learning offer each other, more of which we'll explore in other posts in this series.

Some References
[1] Robert A Rescorla, Allan R Wagner, et al., A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement, Classical conditioning II: Current research and theory, 1972
[2] John O'Doherty, Peter Dayan, Johannes Schultz, Ralf Deichmann, Karl Friston, Raymond J Dolan, Dissociable roles of ventral and dorsal striatum in instrumental conditioning, Science, 2004
[3] L Stein, Chemistry of reward and punishment, Psychopharmacology. A Review of Progress, 1968
[4] Wolfram Schultz, Paul Apicella, Eugenio Scarnati, Tomas Ljungberg, Neuronal activity in monkey ventral striatum related to the expectation of reward, The Journal of Neuroscience, 1992
[5] Elizabeth E Steinberg, Ronald Keiflin, Josiah R Boivin, Ilana B Witten, Karl Deisseroth, Patricia H Janak, A causal link between prediction errors, dopamine neurons and learning, Nature neuroscience, 2013
[6] P Read Montague, Peter Dayan, Terrence J Sejnowski, A framework for mesencephalic dopamine systems based on predictive Hebbian learning, The Journal of neuroscience, 1996
[7] Kent C Berridge, The debate over dopamine’s role in reward: the case for incentive salience, Psychopharmacology, 2007
[8] Leemon Baird, et al., Residual algorithms: Reinforcement learning with function approximation, Proceedings of the twelfth international conference on machine learning, 1995
[9] Richard S Sutton, Andrew G Barto, Reinforcement learning: An introduction, 1998
