About us

We’re an interdisciplinary group of scientists exploring control-systems approaches to the analysis of human sensory electrophysiology.

The brain is a real-time dynamical system. Yet clinical neurophysiology (and, in truth, much of cognitive neuroscience, from ERPs to fMRI) mostly treats it as a static averaging machine, whose behaviour is analysed and diagnosed from responses averaged over trials rather than from its real-time, single-trial variability.

Our vision is to bring about a control-systems revolution in human clinical neurophysiology, by offering patients automated stimulation techniques that adapt in real time to their brain dynamics in order to diagnose, prognosticate and treat neurological and psychiatric disorders.

Our work is both experimental (we collect our own data, using tools borrowed from electrophysiology and psychophysics) and computational (we build our own software to generate experimental stimuli using signal processing, and to analyse physiological recordings using data-driven system modeling). We work with both healthy participants and patients, but our primary focus is to create the next generation of clinical methods in neurology and psychiatry, for pathologies such as coma, stroke, autism spectrum disorder and post-traumatic stress disorder.

The Femto neuro group is part of the larger System Data Science team, a group of seven faculty members working on data-driven analysis, prognostics and health management of natural, industrial and environmental systems (head: Prof. Jean-Marc Nicod), based in the Department of Automation and Robotics of the FEMTO-ST Institute (CNRS/Université de Bourgogne Franche-Comté) in Besançon, France.


Our current work explores two complementary lines of research:

Neurophysiological system identification using reverse correlation

In recent years, inspired by data-driven face psychophysics from Rachael Jack and Philippe Schyns at the University of Glasgow, Frédéric Gosselin at the University of Montreal, and others, our lab has developed a new research paradigm combining speech signal processing and psychophysical reverse correlation (PNAS 2018, Nature Communications 2021).

Our current focus is to extend these approaches to reverse-correlate stimulus features not only against overt behavioural responses, but also against electrophysiological responses extracted from EEG, EMG and autonomic activity. As an application, we are currently collaborating with intensivists at GHU Paris Psychiatrie et Neurosciences to construct individualized sound stimuli that are optimized to measure EEG markers of consciousness in comatose patients.

For this work, we develop and maintain the open-source reverse-correlation toolbox CLEESE - see our Resources page for details.
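For readers unfamiliar with the technique, here is a minimal, self-contained sketch of first-order psychophysical reverse correlation, using a simulated observer and made-up pitch-contour noise. The segment count, toy kernel and decision rule are illustrative assumptions, not our actual CLEESE pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: each trial perturbs the pitch contour of a spoken
# word with random noise over 6 time segments (a CLEESE-style manipulation).
n_trials, n_segments = 1000, 6
noise = rng.normal(0, 1, size=(n_trials, n_segments))

# Hypothetical observer: answers "yes" (True) when the final segments rise
# in pitch, corrupted by internal noise.
true_kernel = np.array([0., 0., 0., 0.5, 1., 2.])
responses = (noise @ true_kernel + rng.normal(0, 1, n_trials)) > 0

# Reverse correlation: the first-order kernel is the difference between the
# mean noise on positive-response and negative-response trials.
kernel = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print(np.round(kernel, 2))  # proportional to true_kernel, up to noise
```

With enough trials, the estimated kernel recovers the shape of the observer's internal template; the same logic applies when the binary behavioural response is replaced by a single-trial electrophysiological measure.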

Closed-loop vocal feedback

Another line of research builds on our recently introduced “vocal feedback” paradigm, in which we use real-time voice technology to let participants read a text out loud while their voice is manipulated without their knowledge. With Petter Johansson and Lars Hall at Lund University, and Katsumi Watanabe at Waseda University, we found that speakers who hear themselves read with a happier or sadder tone of voice also become happier or sadder as a result (PNAS 2016, Consciousness & Cognition 2021).

Our current focus is to extend this paradigm to “close the loop”, incorporating a feedback controller that adapts the voice transformation to real-time measurements of the participant’s voice. As an application, we are currently collaborating with psychiatrists at the Lille University Hospital and the Centre National de Ressources et de Résilience to investigate the use of vocal feedback during exposure therapy for patients with post-traumatic stress disorder (PTSD).

For this work, we develop and maintain the open-source vocal feedback software DAVID - see our Resources page for details.
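To illustrate the closed-loop idea, here is a toy proportional controller: at each step it nudges the applied pitch shift so that the voice the participant hears converges on a target fundamental frequency. The function, gain and target values are hypothetical, not the actual DAVID implementation:

```python
import numpy as np

def update_pitch_shift(heard_f0, target_f0, shift_cents, gain=0.3):
    """One step of a hypothetical proportional controller: adjust the
    applied pitch shift (in cents) in proportion to the error between
    the heard and target fundamental frequencies."""
    error_cents = 1200 * np.log2(target_f0 / heard_f0)
    return shift_cents + gain * error_cents

# Toy simulation: the speaker produces 200 Hz; we want them to hear 210 Hz.
shift, f0 = 0.0, 200.0
for _ in range(20):
    heard = f0 * 2 ** (shift / 1200)  # voice after the transformation
    shift = update_pitch_shift(heard, 210.0, shift)
print(round(f0 * 2 ** (shift / 1200), 1))  # converges close to 210.0
```

In the real paradigm the “plant” is the speaker’s own vocal production, which itself reacts to the manipulated feedback, which is what makes the control problem interesting.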


The neuro group was established at the FEMTO-ST Institute in Jan. 2021. It moved from its previous home as the CREAM music neuroscience team at the Science and Technology of Music and Sound Lab (STMS, IRCAM/CNRS/Sorbonne Université) at IRCAM, Paris, France.

This page archives the CREAM team’s key publications, team members and where each of these people went.


Finally, credit where credit is due: the code for this website is an open-source repo on our github page. This is a trick we learned (and forked!) from the faultless KordingLab and their own excellent lab website at the University of Pennsylvania. Many thanks, and yay for #opensource #openscience. In turn, we welcome everyone to fork and adapt this site for their own work.