We care about open science. Because our research is almost entirely publicly funded, and because all our permanent and non-permanent staff are civil servants in the public sector, we believe we have a moral commitment to make our results freely accessible at all levels of society. This includes publications (open access), but also research material, stimuli, data, and software.
Click on the banners below for direct access to the PDFs of the papers, for a link to the project’s code hosted on the group’s GitHub page, or for a link to the associated research data (stimuli and results).
Our stated objective is that 100% of our publications carry all three banners. If one is missing for a specific project, there’s a good chance that we’re working on it; feel free to request it if you need it.
In addition to the resources specific to the papers below, we also develop and maintain a number of open-source research software tools, including the reverse-correlation toolbox CLEESE and the vocal-feedback platform DAVID, which can be found on our Resources page.
Aligning the smiles of dating dyads causally increases attraction
Arias-Sarah, P., Bedoya, D., Daube, C., Aucouturier, JJ., Hall, L., Johansson, P.
Proceedings of the National Academy of Sciences 121 (45), e2400369121
Emotional contagion to vocal smile revealed by combined pupil reactivity and motor resonance
Merchie, A., Ranty, Z., Aguillon-Hernandez, N., Aucouturier, JJ., Wardak, C., Gomot, M.
Scientific Reports 14 (1), 25043
Neural adaptation to changes in self-voice during puberty
Pinheiro, A., Aucouturier, J.J. & Kotz, S.
Trends in Neurosciences, August 2024.
Cortical responses to looming sources are explained away by the auditory periphery
Benghanem, S., Guha, R., Pruvost-Robieux, E., Levi-Strauss, J., Joucla, C., Cariou, A., Gavaret, M. & Aucouturier, J. J.
Cortex. Vol. 177, 2024.
A simple psychophysical procedure separates representational and noise components in impairments of speech prosody perception after right-hemisphere stroke
Adl Zarrabi, A., Jeulin, M., Bardet, P., Commère, P., Naccache, L., Aucouturier, J. J. & Villain, M.
Scientific Reports volume 14 (15194), 2024.
Mmm whatcha say? Uncovering distal and proximal context effects in first and second-language word perception using psychophysical reverse correlation
Tuttösi, P., Yeung, H.H., Wang, Y., Wang, F., Denis, G., Aucouturier, JJ. & Lim, A.
Interspeech, 2024.
Social affective inferences in the era of AI filters: towards the Bayesian reshaping of human sociality?
Guerouaou, N., Vaiva, G. & Aucouturier, JJ.
OSF Preprints, 2024.
Intact Representation of Vocal Smile in Autism: A reverse correlation approach
Merchie, A., Ranty, Z., Adl Zarrabi, A., Bonnet-Brilhault, F., Houy-Durand, E., Aucouturier, JJ. & Gomot, M.
PsyArXiv Preprints, 2024.
Pupil dilation reflects the dynamic integration of audiovisual emotional speech
Arias Sarah, P., Hall, L., Saitovitch, A., Aucouturier, J. J., Zilbovicius, M., & Johansson, P.
Scientific reports, 13(1), 5507, 2023.
Algorithmic voice transformations reveal the phonological basis of language-familiarity effects in cross-cultural emotion judgments
Nakai, T., Rachman, L., Arias Sarah, P., Okanoya, K., & Aucouturier, J.J.
Plos one, 18(5), e0285028, 2023.
Combining GAN with reverse correlation to construct personalized facial expressions
Yan, S., Soladié, C., Aucouturier, J. J., & Seguier, R.
Plos one, 18(8), e0290612, 2023.
The implicit influence of pitch contours and emotional timbre on P300 components in an own-name oddball paradigm
Pruvost-Robieux, E., Joucla, C., Benghanem, S., Guha, R., Liuni, M., Gavaret, M. & Aucouturier, J.J.
bioRxiv 2023.11.30.569381.
The psychophysics of empathy: Using reverse-correlation to quantify the overlap between self & other representations of emotional expressions
Zaied, S, Soladié, C. & Aucouturier, J.J.
PsyArXiv rdmve, 2023.
Cracking the pitch code of music-motor synchronization using data-driven methods
Migotti, L., Decultot, Q., Grailhe, P. & Aucouturier, J. J.
PsyArXiv zkbn3, 2023.
Three simple steps to improve the interpretability of EEG-SVM studies
Coralie Joucla, Damien Gabriel, Juan-Pablo Ortega & Emmanuel Haffen
Journal of Neurophysiology, 2022
It’s not what you say, it’s how you say it: a retrospective study of the impact of prosody on own-name P300 in comatose patients
Estelle Pruvost-Robieux, Nathalie André-Obadia, Angela Marchi, Tarek Sharshar, Marco Liuni, Martine Gavaret & Jean-Julien Aucouturier
Clinical Neurophysiology, vol. 135, 2022
The shallow of your smile: the ethics of expressive vocal deep-fakes
Nadia Guerouaou, Guillaume Vaiva & JJ Aucouturier
Philosophical Transactions of the Royal Society B, vol. 377 (1841), 2021
Even violins can cry: specifically vocal emotional behaviours also drive the perception of emotions in non-vocal music
Daniel Bedoya, Pablo Arias, Laura Rachman, Marco Liuni, Clément Canonne, Louise Goupil & JJ Aucouturier
Philosophical Transactions of the Royal Society B, vol. 376 (1840), 2021
Facial mimicry in the congenitally blind
Pablo Arias, Caren Bellmann & JJ Aucouturier
Current Biology, vol. 31(19), PR1112-R1114 (2021)
Distinct signatures of subjective confidence and objective accuracy in speech prosody
Louise Goupil & JJ Aucouturier
Cognition, vol. 212, 104661 (2021)
Vocal signals only impact speakers’ own emotions when they are self-attributed
Louise Goupil, Petter Johansson, Lars Hall & JJ Aucouturier
Consciousness & Cognition, vol. 88, 103072 (2021)
Listeners perception of certainty and honesty of another speaker is associated with a common prosodic signature
Louise Goupil, Emmanuel Ponsot, Daniel Richardson, Gabriel Reyes & JJ Aucouturier
Nature Communications, vol. 12, 861 (2021)
Note: Articles published before 2020 correspond to work conducted in the CREAM music neuroscience team at IRCAM. We list here a selection of publications that are important to our current research. For a complete list of publications on music cognition and vocal emotions from the CREAM team (2016-2020), see the CREAM archive page. For even earlier work on machine learning and audio signal processing, see JJA’s Google Scholar page.
CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition
Juan Jose Burred, Emmanuel Ponsot, Louise Goupil, Marco Liuni & JJ Aucouturier
PLoS one, 14(4), e0205943, 2019
Cracking the social code of speech prosody using reverse correlation
Emmanuel Ponsot, Juan Jose Burred, Pascal Belin & JJ Aucouturier
Proceedings of the National Academy of Sciences, vol 115 (15) 3972-3977, 2018
Uncovering mental representations of smiled speech using reverse correlation
Emmanuel Ponsot, Pablo Arias & JJ Aucouturier
Journal of the Acoustical Society of America, vol 143 (1), 2018.
DAVID: An open-source platform for real-time transformation of infra-segmental emotional cues in running speech
Laura Rachman, Marco Liuni, Pablo Arias, Andreas Lind, Petter Johansson, Lars Hall, Daniel Richardson, Katsumi Watanabe, Stéphanie Dubal & JJ Aucouturier
Behaviour Research Methods, vol. 50(1), 323–343, 2017
Covert Digital Manipulation of Vocal Emotion Alter Speakers’ Emotional State in a Congruent Direction
JJ Aucouturier, Petter Johansson, Lars Hall, Rodrigo Segnini, Lolita Mercadié & Katsumi Watanabe
Proceedings of the National Academy of Sciences, vol. 113 no. 4, 2016
The documents listed here are available for downloading and have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be re-posted without the explicit permission of the copyright holder.