The Neuro group was established at the FEMTO-ST Institute in Jan. 2021. It grew out of the CREAM music neuroscience team (2014-2020), hosted at the Science and Technology of Music and Sound Lab (STMS, IRCAM/CNRS/Sorbonne Université) at IRCAM, Paris, France.
The CREAM team was originally funded by an ERC Starting Grant (Cracking the Emotional Code of Music - 335536, PI: JJ Aucouturier), and later received additional funding from ANR REFLETS (2017-2021) and the Fondation pour l'Audition (Prix d'Émergence scientifique 2018).
The research vision of the CREAM team was to bridge the two disciplines of audio signal processing and cognitive neuroscience/psychology, and to turn voice and music into cognitive technologies, with algorithms able to create sounds that selectively activate certain neural pathways or certain emotions. To this end, the project introduced several novel methodologies to study the effect of sound on the brain, in the form of acoustic transformation software (SMILE; DAVID, 2800 downloads as of Jan. 2021; CLEESE, 950 downloads; ANGUS, 1200 downloads). These technologies formed the basis of experimental research articles published notably in PNAS (2016; 2018), Current Biology (2018; 2021) and Nature Communications (2021).
Over the course of the project (2014-2020), the CREAM team employed more than 15 PhD-level scientists, roughly half of them from a computer science or audio engineering background and the other half from a psychology or cognitive neuroscience background - a diverse, amazing crew of young scientists who have, since the end of the project, moved on, several of them to start their own independent positions. Beyond academia, the software technologies produced in the project have also been transferred to the voice technology startup Alta Voce, which is now successfully testing the impact of real-time voice transformation in the field of customer relations.
The CREAM team/project ended in December 2020. This page archives the team's key publications and members, and where each of them went next. The legacy of the CREAM team continues at the FEMTO-ST Neuro group, established in Jan. 2021 on the basis of some of project CREAM's key findings in psychophysical reverse correlation and vocal feedback.
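For readers unfamiliar with the technique, the sketch below illustrates the general logic of first-order psychophysical reverse correlation on made-up data: stimuli carrying random prosodic perturbations are judged by a listener, and averaging the perturbations of accepted versus rejected trials recovers (up to scale) the listener's internal template. This is purely illustrative and does not use the team's actual tools (e.g. CLEESE) or analysis code; all array sizes and the "template" are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment: 500 trials, each voice stimulus perturbed by random
# pitch shifts over 6 time segments (arbitrary units).
n_trials, n_segments = 500, 6
perturbations = rng.normal(0.0, 1.0, (n_trials, n_segments))

# Hypothetical listener whose internal template (e.g. for "interrogative" prosody)
# weights a final pitch rise; judgments are noisy binary responses.
template = np.array([0.0, 0.0, 0.0, 0.0, 0.5, 1.0])
responses = (perturbations @ template + rng.normal(0.0, 1.0, n_trials)) > 0

# First-order reverse-correlation kernel: mean perturbation on "yes" trials minus
# mean perturbation on "no" trials, which approximates the internal template.
kernel = perturbations[responses].mean(axis=0) - perturbations[~responses].mean(axis=0)
print(np.round(kernel, 2))  # rises toward the final segments, like the template
```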
Click on the banners below for direct access to the PDFs of the papers.
Even violins can cry: specifically vocal emotional behaviours also drive the perception of emotions in non-vocal music
Daniel Bedoya, Pablo Arias, Laura Rachman, Marco Liuni, Clément Canonne, Louise Goupil & JJ Aucouturier
Philosophical Transactions of the Royal Society B, vol. 376(1840), 2021
Facial mimicry in the congenitally blind
Pablo Arias, Caren Bellmann & JJ Aucouturier
Current Biology, vol. 31(19), R1112-R1114 (2021)
Distinct signatures of subjective confidence and objective accuracy in speech prosody
Louise Goupil & JJ Aucouturier
Cognition, vol. 212, 104661 (2021)
Emergent Shared Intentions Support Coordination During Collective Musical Improvisations
Louise Goupil, Thomas Wolf, Pierre Saint-Germier, JJ Aucouturier & Clément Canonne
Cognitive Science, vol. 45 (2021)
Vocal signals only impact speakers’ own emotions when they are self-attributed
Louise Goupil, Petter Johansson, Lars Hall & JJ Aucouturier
Consciousness & Cognition, vol. 88, 103072 (2021)
Listeners' perceptions of the certainty and honesty of a speaker are associated with a common prosodic signature
Louise Goupil, Emmanuel Ponsot, Daniel Richardson, Gabriel Reyes & JJ Aucouturier
Nature Communications, vol. 12, 861 (2021)
Beyond Correlation: Acoustic Transformation Methods for the Experimental Study of Emotional Voice and Speech
Pablo Arias, Laura Rachman, Marco Liuni & JJ Aucouturier
Emotion Review, vol 13 (1), 2020.
Realistic manipulation of facial and vocal smiles in real-world video streams
Pablo Arias, Catherine Soladié, Oussema Bouafif, Axel Röbel, Renaud Séguier & JJ Aucouturier
IEEE Transactions on Affective Computing, Vol. 11(3), 2020.
Neural entrainment to music is sensitive to melodic spectral complexity
Indiana Wollman, Pablo Arias, JJ Aucouturier & Benjamin Morillon
Journal of Neurophysiology, 123(3), 1063-1071, 2020.
Sound context modulates perceived vocal emotion
Marco Liuni, Emmanuel Ponsot, Greg Bryant & JJ Aucouturier
Behavioural Processes, vol 172, 104042, 2020
Vocal markers of pre-operative anxiety: a pilot study
Gilles Guerrier, Laurent Lellouch, Marco Liuni, Andrea Vaglio, Pierre-Raphaël Rothschild, Christophe Baillard & JJ Aucouturier
British Journal of Anaesthesia, vol 123(4), e486–e488, 2019.
Enjoy The Violence: Is appreciation for extreme music the result of cognitive control over the threat response system?
Rosalie Ollivier, Louise Goupil, Marco Liuni & JJ Aucouturier.
Music Perception, 37(2), 2019
Happy you, happy me: expressive changes on a stranger’s voice recruit faster implicit processes than self-produced expressions
Laura Rachman, Stéphanie Dubal & JJ Aucouturier
Social Cognitive and Affective Neuroscience (SCAN), vol 14(5), 559–568, 2019.
CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition
Juan Jose Burred, Emmanuel Ponsot, Louise Goupil, Marco Liuni & JJ Aucouturier
PLoS ONE, 14(4), e0205943, 2019
Musical pleasure and musical emotions (Commentary on Ferreri et al., 2019)
Louise Goupil & JJ Aucouturier
Proceedings of the National Academy of Sciences, vol. 116(9), 3364-3366, 2019
Auditory smiles trigger unconscious facial imitations
Pablo Arias, Pascal Belin & JJ Aucouturier
Current Biology, vol. 28(14), R782-R783, 2018
Cracking the social code of speech prosody using reverse correlation
Emmanuel Ponsot, Juan Jose Burred, Pascal Belin & JJ Aucouturier
Proceedings of the National Academy of Sciences, vol. 115(15), 3972-3977, 2018
Uncovering mental representations of smiled speech using reverse correlation
Emmanuel Ponsot, Pablo Arias & JJ Aucouturier
Journal of the Acoustical Society of America, vol 143 (1), 2018.
Musical friends and foes: the social cognition of affiliation and control in improvised interactions
JJ Aucouturier & Clément Canonne
Cognition, vol 161, 94–108, 2017
DAVID: An open-source platform for real-time transformation of infra-segmental emotional cues in running speech
Laura Rachman, Marco Liuni, Pablo Arias, Andreas Lind, Petter Johansson, Lars Hall, Daniel Richardson, Katsumi Watanabe, Stéphanie Dubal & JJ Aucouturier
Behavior Research Methods, vol. 50(1), 323–343, 2017
Emergency medical triage decisions are swayed by computer-manipulated cues of physical dominance in caller’s voice
Laurent Boidron, Karim Boudenia, Christophe Avena, Jean-Michel Boucheix & JJ Aucouturier
Scientific Reports vol 6, 30219, 2016
Covert Digital Manipulation of Vocal Emotion Alter Speakers’ Emotional State in a Congruent Direction
JJ Aucouturier, Petter Johansson, Lars Hall, Rodrigo Segnini, Lolita Mercadié & Katsumi Watanabe
Proceedings of the National Academy of Sciences, vol. 113 no. 4, 2016
Who are they | Were in CREAM as | Where are they now |
---|---|---|
Vasso Zachari | Lab manager (2016 - 2020) | PhD Student in Historical anthropology, EHESS, FR |
Pablo Arias | PhD Student (2014 - 2018) | Postdoc (2019 - 2021) in Petter Johansson’s lab @ Lund University, SE |
Laura Rachman | PhD Student (2014 - 2018) | Postdoc (2019 - 2021) in Deniz Başkent’s lab @ University of Groningen, NL |
Marco Liuni | Postdoc (2014 - 2016), then research scientist (2017-2020) | Co-founder, CPO, Alta Voce, Paris, FR |
Louise Goupil | Postdoc (2016 - 2020) | CNRS Researcher (chargée de recherche, since 2022) in LPNC (Université Grenoble Alpes/Université Savoie Mont-Blanc), Grenoble, FR |
Emmanuel Ponsot | Postdoc (2016 - 2018) | CNRS Researcher (chargé de recherche, since 2022) in STMS Lab (IRCAM/CNRS/Sorbonne Université), Paris, FR |
Beau Sievers | Visiting PhD Student (2016) | Postdoc in Thalia Wheatley’s lab @ Dartmouth, US |
Thomas Wolf | Visiting PhD Student (2018) | Postdoc in Natalie Sebanz's and Günther Knoblich's lab at CEU, Vienna, AT |
Tomoya Nakai | Visiting PhD Student (2015) | JSPS Postdoc in Jérôme Prado's lab, Lyon Neuroscience Research Center, FR |
Andreas Lind | Visiting postdoc (2015-2016) | Visiting Research Fellow, Dept of Cognitive Science, Lund University, SE |
Laurent Lellouch | Research engineer (2019-2020) | Independent music instructor, Montpellier, FR |
Daniel Bedoya | Master student (2019) | PhD Student, STMS Lab (IRCAM/CNRS/Sorbonne Université), Paris, FR |
Rosalie Ollivier | Master student (2019) | Market research analyst, Harris Interactive, Paris, FR |
Lou Séropian | Master student (2018) | PhD Student, Lyon Neuroscience Research Center, FR |
Andrea Vaglio | Master student (2018) | PhD Student, Deezer, Paris, FR |
Mélissa Jeulin | Master student (2017) | Speech and Language Therapist, Paris, FR |
Sarah Hermann | Master student (2016) | Independent music producer |
Hugo Trad | Undergraduate placement student (2016) | PhD Student, SCIAM, Paris, FR |
Edgar Hemery | Master student (2014) | Founder, CEO, Embodme |