
A distributional code for value in dopamine-based reinforcement learning

2020 · 441 citations
Will Dabney, Zeb Kurth‐Nelson, Naoshige Uchida

This work proposes an account of dopamine-based reinforcement learning, inspired by recent artificial intelligence research on distributional reinforcement learning, in which the brain represents possible future rewards not as a single mean of stochastic outcomes, as in the canonical model, but as a probability distribution.

Abstract

Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain [1-3]. According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning [4-6]. We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from the mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
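To make the core idea concrete, the following is a minimal sketch (not the authors' actual code) of the distributional learning rule the abstract alludes to: a population of value predictors in which each unit scales positive and negative reward prediction errors differently. The number of units, the learning rates, and the reward distribution below are all illustrative assumptions; under asymmetric scaling, each unit converges to a different expectile of the reward distribution, so the population as a whole encodes the distribution rather than a single mean.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 7                                    # hypothetical number of value predictors
alpha_plus = np.linspace(0.02, 0.18, n_cells)  # learning rate applied when RPE > 0
alpha_minus = alpha_plus[::-1]                 # learning rate applied when RPE < 0
values = np.zeros(n_cells)                     # each cell's learned value estimate

def sample_reward():
    """Illustrative stochastic reward: 50% chance of 1.0, 50% chance of 0.1."""
    return 1.0 if rng.random() < 0.5 else 0.1

for _ in range(20_000):
    r = sample_reward()
    rpe = r - values                                     # per-cell reward prediction errors
    scale = np.where(rpe > 0, alpha_plus, alpha_minus)   # asymmetric scaling
    values += scale * rpe                                # update toward the sample

# Each cell settles near the expectile with asymmetry
# tau = alpha_plus / (alpha_plus + alpha_minus),
# spanning pessimistic to optimistic estimates of the same reward.
print(np.round(values, 3))
```

Running this sketch, "optimistic" units (large alpha_plus relative to alpha_minus) settle near the high reward outcome while "pessimistic" units settle near the low one, which is the kind of diversity across dopamine neurons that the paper's empirical predictions target.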