Neuroscience, AI and Psychology
Our research group integrates ideas and methods from systems neuroscience, experimental psychology, and machine learning to understand how the heterogeneity of neuron types endows brain circuits with the efficient distributed computations underlying behavior and cognition. We develop and experimentally validate new models of distributed computation in brain circuits, including a new account of how dopamine-based distributed reinforcement learning could be implemented in the brain. To this end, we perform large-scale electrophysiological recordings in rodents performing complex behavioral tasks, build new data analysis tools, and develop new classes of biologically plausible neural networks.
Cognitive computations: from computational models to neural representations
A key challenge in understanding the neural mechanisms underlying the abstract cognitive variables studied in psychology and cognitive science is measuring those variables through non-verbal reports (Ott*, Masset* & Kepecs, CSHL Symp, 2018). This requires mechanistic cognitive models that describe how these variables vary across tasks and contexts as a function of task parameters we can control as experimenters, just as one can measure tuning to the orientation of different visual stimuli. We applied this approach to clarify the relationship between a proposed behavioral readout and sunk costs (Ott*, Masset* et al., Science Advances, 2022) and to show that single neurons in OFC have the properties of an abstract confidence representation (Masset*, Ott* et al., Cell, 2020).
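As an illustrative sketch (a toy example of our own, not the model from the cited papers), statistical decision confidence in a signal-detection setting can be written as the probability that a choice is correct given a noisy evidence sample, with the signal strength and noise level as experimenter-controlled task parameters:

```python
import numpy as np

def confidence(evidence, signal=1.0, sigma=1.0):
    """P(choice correct | evidence) for two equally likely categories
    at +/- signal, observed through Gaussian noise of width sigma.
    The chosen category is sign(evidence)."""
    ll_pos = np.exp(-(evidence - signal) ** 2 / (2 * sigma ** 2))
    ll_neg = np.exp(-(evidence + signal) ** 2 / (2 * sigma ** 2))
    p_pos = ll_pos / (ll_pos + ll_neg)   # posterior of the + category
    return np.maximum(p_pos, 1 - p_pos)  # confidence in the chosen side
```

Confidence equals 0.5 at the category boundary and grows with evidence magnitude, yielding quantitative predictions that can be compared against non-verbal behavioral readouts.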
Distributed reinforcement learning in the brain
Our work has extended the classic reward prediction error framework for dopamine-based reinforcement learning, using recent advances in machine learning to explain the functional heterogeneity of dopamine signaling (Masset & Gershman, in The Handbook of Dopamine, 2025). We have investigated how neural representations control learning dynamics (Bordelon, Masset, et al., NeurIPS, 2023). In particular, we have shown that dopamine neurons implement multi-timescale reinforcement learning and that individual dopaminergic neurons have a characteristic discount factor that dictates how they compute the prediction error (Masset* et al., Nature, 2025).
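The core idea can be sketched in a hypothetical tabular example (not the analysis from the cited papers): a population of TD(0) learners in which each channel carries its own discount factor and computes its own prediction error, so short-horizon channels discount a delayed reward more steeply:

```python
import numpy as np

def multi_timescale_td(rewards, gammas, alpha=0.1, n_episodes=500):
    """Tabular TD(0) on a linear chain of states; the reward is
    delivered on entering each state. One value channel per gamma.
    Returns values of shape (n_channels, n_states)."""
    n_states = len(rewards)
    values = np.zeros((len(gammas), n_states))
    for _ in range(n_episodes):
        for s in range(n_states - 1):
            for i, g in enumerate(gammas):
                # Each channel computes a prediction error with its
                # own characteristic discount factor.
                delta = rewards[s + 1] + g * values[i, s + 1] - values[i, s]
                values[i, s] += alpha * delta
    return values

# A unit reward delivered five steps in the future.
rewards = [0, 0, 0, 0, 0, 1]
gammas = [0.5, 0.9, 0.99]
V = multi_timescale_td(rewards, gammas)
# At the start state, channels with smaller gamma assign lower value:
# V[0, 0] < V[1, 0] < V[2, 0].
```

Reading out values across channels with different discount factors provides information about the timing of future rewards, not just their discounted sum.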
Structure of representations in neural networks
Neural representations can be understood by applying concepts of efficient representations from machine learning and optimization (Choudhary et al., ArXiv, 2025). Irrespective of the structure of the coding space, we can characterize neural codes by the relative distances between features: the representational geometry. We have developed biologically plausible neural network models to understand how the neural code controls the representational geometry in spiking networks (Masset et al., NeurIPS, 2022), olfactory processing (Zavatone-Veth, Masset et al., NeurIPS, 2023), and head-direction systems (Pjanovic et al., BioRxiv, 2025). Importantly, the mapping of single neurons onto a given geometry need not be unique, potentially explaining observations of drifting neural representations (Masset et al., Biological Cybernetics, 2022).
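A minimal numerical sketch of this last point (an assumed example, not taken from the cited papers): the representational geometry is the matrix of pairwise distances between population responses, and any rotation of the code changes single-neuron tuning while leaving the geometry untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
responses = rng.normal(size=(8, 20))   # 8 stimuli x 20 neurons

def geometry(X):
    """Pairwise Euclidean distances between rows (stimuli)."""
    d = X[:, None, :] - X[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

# A random orthogonal transform: a "drifted" code in which every
# neuron's tuning has changed.
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))
drifted = responses @ Q

# Single-neuron tuning differs, yet the geometry is identical.
assert np.allclose(geometry(responses), geometry(drifted))
```

Because downstream readouts that depend only on relative distances are unaffected by such transformations, many distinct single-neuron codes can realize the same geometry.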
Computational tools to analyze behavior and neural representations
Modern neuroscience techniques allow the recording of hundreds to thousands of neurons in animals performing behavioral tasks. We have developed several computational tools to characterize the structure of neural populations and the tuning of single neurons. Our analysis tools based on matrix decomposition functionally identify neural populations from their activity dynamics during behavioral tasks (Hirokawa et al., Nature, 2019). We have also adapted interpretable deep learning methods based on algorithm unrolling to deconvolve multiplexed signals in neural responses, including those of dopamine neurons (Tolooshams et al., Neuron, 2025).
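To illustrate the matrix-decomposition idea (a hedged sketch on synthetic data, not the method of Hirokawa et al.): factorizing a trial-averaged activity matrix (neurons x time) into a few nonnegative components groups neurons by shared response dynamics. Here we use standard Lee–Seung multiplicative updates for nonnegative matrix factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
# Two hypothetical response motifs: an early and a late transient.
motifs = np.stack([np.exp(-((t - 0.2) / 0.1) ** 2),
                   np.exp(-((t - 0.7) / 0.1) ** 2)])
loadings = rng.random((30, 2))                  # 30 neurons, mixed motifs
activity = loadings @ motifs + 0.01 * rng.random((30, 50))

def nmf(X, k, n_iter=300, eps=1e-9):
    """Factorize X ~= W @ H with nonnegative W (neurons x k) and
    H (k x time), via multiplicative updates for Frobenius loss."""
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(activity, k=2)
# Each neuron's dominant component assigns it to a functional population.
labels = W.argmax(axis=1)
```

The rows of H recover the shared temporal motifs, and the loadings in W provide a soft functional-population assignment for each neuron.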