Impaired neural encoding of naturalistic audiovisual speech in autism

Bibliographic Details
Published in: NeuroImage vol. 318 (Sep 2025)
Main author: Vanneau, Theo
Other authors: Crosse, Michael J., Foxe, John J., Molholm, Sophie
Published: Elsevier Limited
Links: Citation/Abstract, Full Text, Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3246084708
003 UK-CbPIL
022 |a 1053-8119 
022 |a 1095-9572 
024 7 |a 10.1016/j.neuroimage.2025.121397  |2 doi 
035 |a 3246084708 
045 2 |b d20250901  |b d20250930 
084 |a 221628  |2 nlm 
100 1 |a Vanneau, Theo  |u The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA 
245 1 |a Impaired neural encoding of naturalistic audiovisual speech in autism 
260 |b Elsevier Limited  |c Sep 2025 
513 |a Journal Article 
520 3 |a Visual cues from a speaker's face can significantly improve speech comprehension in noisy environments through multisensory integration (MSI), the process by which the brain combines auditory and visual inputs. Individuals with Autism Spectrum Disorder (ASD), however, often show atypical MSI, particularly during speech processing, which may contribute to the social communication difficulties central to the diagnosis. Understanding the neural basis of impaired MSI in ASD, especially during naturalistic speech, is critical for developing targeted interventions. Most neurophysiological studies have relied on simplified speech stimuli (e.g., isolated syllables or words), limiting their ecological validity. In this study, we used high-density EEG and linear encoding and decoding models to assess the neural processing of continuous audiovisual speech in adolescents and young adults with ASD (N = 23) and age-matched typically developing controls (N = 19). Participants watched and listened to naturalistic speech under auditory-only, visual-only, and audiovisual conditions, with varying levels of background noise, and were tasked with detecting a target word. Linear models were used to quantify cortical tracking of the speech envelope and phonetic features. In the audiovisual condition, the ASD group showed reduced behavioral performance and weaker neural tracking of both acoustic and phonetic features, relative to controls. In contrast, in the auditory-only condition, increasing background noise reduced behavioral and model performance similarly across groups. These results provide, for the first time, converging behavioral and neurophysiological evidence of impaired multisensory enhancement for continuous, natural speech in ASD. 
520 3 |a Significance Statement: In adverse hearing conditions, seeing a speaker's face and facial movements enhances speech comprehension through a process called multisensory integration, whereby the brain combines visual and auditory inputs to facilitate perception and communication. However, individuals with Autism Spectrum Disorder (ASD) often struggle with this process, particularly during speech comprehension. Previous findings based on simple, discrete stimuli do not fully explain how the processing of continuous, natural multisensory speech is affected in ASD. In our study, we used natural, continuous speech stimuli to compare the neural processing of various speech features between individuals with ASD and typically developing (TD) controls, across auditory and audiovisual conditions with varying levels of background noise. Our findings showed no group differences in the encoding of auditory-alone speech, with both groups similarly affected by increasing levels of noise. For audiovisual speech, however, individuals with ASD displayed reduced neural encoding of both the acoustic envelope and the phonetic features, indicating impaired neural processing of continuous audiovisual multisensory speech in autism. 
653 |a Child development 
653 |a Speech perception 
653 |a Comprehension 
653 |a Noise 
653 |a Young adults 
653 |a Communication 
653 |a Autism 
653 |a Brain 
653 |a Encoding (Cognitive process) 
653 |a Continuous speech 
653 |a Syllables 
653 |a Speech 
653 |a Visual stimuli 
653 |a Adolescence 
653 |a Phonetic features 
653 |a Adolescents 
653 |a Hearing 
653 |a Communication disorders 
653 |a Information processing 
653 |a Phonetics 
653 |a Sensory integration 
653 |a Acoustic phonetics 
653 |a Childhood 
653 |a Neural coding 
653 |a Reduction (Phonological or Phonetic) 
653 |a Linear analysis 
653 |a Medical diagnosis 
653 |a Models 
653 |a Encoding 
653 |a Tracking 
653 |a Behavior 
653 |a Stimuli 
653 |a Cues 
653 |a Speeches 
653 |a Adults 
653 |a Acoustics 
653 |a Electroencephalography 
653 |a Facial movements 
653 |a Decoding 
653 |a Groups 
700 1 |a Crosse, Michael J.  |u The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA 
700 1 |a Foxe, John J.  |u The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA 
700 1 |a Molholm, Sophie  |u The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx 10461, NY, USA 
773 0 |t NeuroImage  |g vol. 318 (Sep 2025) 
786 0 |d ProQuest  |t Health & Medical Collection 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3246084708/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3246084708/fulltext/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3246084708/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
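
Note on method: the "linear encoding models" named in the abstract map a time-lagged representation of the speech stimulus (here, the acoustic envelope) to the EEG, in the spirit of the multivariate temporal response function (mTRF) framework that co-author Crosse has published on. The sketch below is not the paper's analysis code; it is a minimal illustration on synthetic data, and the sample rate, lag window (0-400 ms), and ridge parameter are arbitrary choices for demonstration.

# Minimal sketch of a forward (encoding) TRF: ridge regression from a
# time-lagged speech envelope to multichannel EEG. All data are synthetic
# and all parameter values are illustrative assumptions, not the paper's.
import numpy as np

fs = 64                                   # assumed post-downsampling rate (Hz)
n_samples, n_channels = fs * 60, 32       # one minute of 32-channel "EEG"
rng = np.random.default_rng(0)

envelope = np.abs(rng.standard_normal(n_samples))   # stand-in speech envelope
eeg = rng.standard_normal((n_samples, n_channels))  # stand-in EEG recording

def lagged_design(x, lags):
    """Stack time-shifted copies of x into a (samples x lags) design matrix."""
    X = np.zeros((len(x), len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = x[:len(x) - lag]     # shift stimulus forward by `lag`
    return X

lags = np.arange(0, int(0.4 * fs))        # 0-400 ms of stimulus history
X = lagged_design(envelope, lags)

# Ridge regression: W = (X'X + lambda*I)^-1 X'Y, one TRF per EEG channel
lam = 1e2                                 # regularization (arbitrary here)
XtX = X.T @ X + lam * np.eye(X.shape[1])
W = np.linalg.solve(XtX, X.T @ eeg)       # (lags x channels) TRF weights

# "Cortical tracking" is then summarized as the Pearson correlation between
# predicted and recorded EEG, per channel (near zero for this random data).
pred = X @ W
r = [np.corrcoef(pred[:, c], eeg[:, c])[0, 1] for c in range(n_channels)]
print(f"mean prediction r over channels: {np.mean(r):.3f}")

One design point worth noting: forward (encoding) models like this yield interpretable weights per channel and lag, whereas the backward (decoding) models the abstract also mentions reconstruct the stimulus from all EEG channels jointly; in practice both are fit with the same kind of regularized linear regression, cross-validated over trials.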