Title: The perceptual and cerebral representation of natural sounds
Abstract: Every day, we are exposed to highly diverse natural sounds (e.g., whooshes, chirps, impulses, rapidly modulated harmonic signals) and recognise the sound-generating objects and events in the environment (e.g., a gust of wind, birds, a nail being hammered, someone speaking). Understanding their perception and neural representation has been the goal of my research since the beginning of my career. I will start this talk with a brief overview of my research portfolio, ranging from the psychophysics of impacted and walked-upon objects and of musical timbre, to the neuroimaging of multisensory processes (MEG) and of emotions in voices (fMRI+MEG). I will then present two studies that aim to disentangle the relative importance of acoustic and semantic information in the perception and neural representation (fMRI) of natural sounds. Finally, I will describe in detail a recent analysis of behavioural and neuroimaging (fMRI) data that contrasted the representations derived from a wide range of computational models, from acoustic models to natural language processing models and sound-to-event deep neural networks.