Functional brain networks predicting sustained attention are not specific to perceptual modality

Bibliographic Details
Published in: Network Neuroscience vol. 9, no. 1 (2025), p. 303
Main Author: Corriveau, Anna
Other Authors: Jin, Ke; Terashima, Hiroki; Kondo, Hirohito M; Rosenberg, Monica D
Published: MIT Press Journals, The
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3185249526
003 UK-CbPIL
022 |a 2472-1751 
024 7 |a 10.1162/netn_a_00430  |2 doi 
035 |a 3185249526 
045 2 |b d20250101  |b d20250331 
100 1 |a Corriveau, Anna 
245 1 |a Functional brain networks predicting sustained attention are not specific to perceptual modality 
260 |b MIT Press Journals, The  |c 2025 
513 |a Journal Article 
520 3 |a Sustained attention is essential for daily life and can be directed to information from different perceptual modalities, including audition and vision. Recently, cognitive neuroscience has aimed to identify neural predictors of behavior that generalize across datasets. Prior work has shown strong generalization of models trained to predict individual differences in sustained attention performance from patterns of fMRI functional connectivity. However, it is an open question whether predictions of sustained attention are specific to the perceptual modality in which they are trained. In the current study, we test whether connectome-based models predict performance on attention tasks performed in different modalities. We show first that a predefined network trained to predict adults’ visual sustained attention performance generalizes to predict auditory sustained attention performance in three independent datasets (N1 = 29, N2 = 60, N3 = 17). Next, we train new network models to predict performance on visual and auditory attention tasks separately. We find that functional networks are largely modality general, with both model-unique and shared model features predicting sustained attention performance in independent datasets regardless of task modality. Results support the supposition that visual and auditory sustained attention rely on shared neural mechanisms and demonstrate robust generalizability of whole-brain functional network models of sustained attention. Author Summary: While previous work has demonstrated external validity of functional connectivity-based networks for the prediction of cognitive and attentional performance, testing generalization across visual and auditory perceptual modalities has been limited. The current study demonstrates robust prediction of sustained attention performance, regardless of the perceptual modality in which models are trained or tested. Results demonstrate that connectivity-based models may generalize broadly, capturing variance in sustained attention performance that is agnostic to the perceptual modality of model training.
653 |a Visual tasks 
653 |a Datasets 
653 |a Auditory tasks 
653 |a Visual perception 
653 |a Functional magnetic resonance imaging 
653 |a Performance prediction 
653 |a Predictions 
653 |a Attention 
653 |a Hearing 
653 |a Brain 
653 |a Networks 
653 |a Sensory integration 
653 |a Robustness 
653 |a Neural networks 
653 |a Neurosciences 
700 1 |a Jin, Ke 
700 1 |a Terashima, Hiroki 
700 1 |a Kondo, Hirohito M 
700 1 |a Rosenberg, Monica D 
773 0 |t Network Neuroscience  |g vol. 9, no. 1 (2025), p. 303 
786 0 |d ProQuest  |t Biological Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3185249526/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3185249526/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
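
Note on the method described in the abstract: connectome-based prediction of behavior from fMRI functional connectivity follows a well-documented general recipe (correlate each connectivity edge with behavior in training subjects, select suprathreshold edges, summarize them as a network-strength score, and fit a linear model that can then be applied to independent datasets). The sketch below is a minimal illustration of that general recipe, in the style of the standard CPM protocol; it is not the authors' actual pipeline, and every variable name, the p < .01 edge-selection threshold, and the synthetic data dimensions are assumptions made for the example.

import numpy as np
from scipy import stats

def train_cpm(fc_train, behavior, p_thresh=0.01):
    """Select edges correlated with behavior and fit a linear model.

    fc_train : (subjects, edges) array of vectorized functional connectivity
               (assumed precomputed, e.g., upper triangle of each FC matrix)
    behavior : (subjects,) array, e.g., sustained attention task performance
    """
    n_sub, n_edges = fc_train.shape
    r = np.empty(n_edges)
    p = np.empty(n_edges)
    for e in range(n_edges):
        r[e], p[e] = stats.pearsonr(fc_train[:, e], behavior)
    pos_mask = (p < p_thresh) & (r > 0)   # edges predicting better performance
    neg_mask = (p < p_thresh) & (r < 0)   # edges predicting worse performance
    # Summary feature: positive-network strength minus negative-network strength
    strength = fc_train[:, pos_mask].sum(axis=1) - fc_train[:, neg_mask].sum(axis=1)
    slope, intercept = np.polyfit(strength, behavior, deg=1)
    return {"pos": pos_mask, "neg": neg_mask, "slope": slope, "intercept": intercept}

def predict_cpm(model, fc_test):
    """Apply a trained model to new subjects (e.g., an auditory-task dataset)."""
    strength = (fc_test[:, model["pos"]].sum(axis=1)
                - fc_test[:, model["neg"]].sum(axis=1))
    return model["slope"] * strength + model["intercept"]

# Illustrative use with synthetic data standing in for two independent datasets:
rng = np.random.default_rng(0)
fc_visual = rng.standard_normal((60, 4950))    # e.g., 100-node atlas -> 4950 edges
behav_visual = rng.standard_normal(60)
fc_auditory = rng.standard_normal((29, 4950))

model = train_cpm(fc_visual, behav_visual)
predicted = predict_cpm(model, fc_auditory)
# Generalization would then be assessed by correlating `predicted` with the
# auditory dataset's observed performance (e.g., a rank correlation).

The design choice that makes cross-dataset tests like those in the abstract possible is the summary feature: thousands of edges are reduced to a single network-strength score, so a model trained on a visual task can be applied unchanged to subjects who performed an auditory task.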