Multimodal heterogeneous graph fusion for automated obstructive sleep apnea-hypopnea syndrome diagnosis

Saved in:
Bibliographic Details
Published in: Complex & Intelligent Systems vol. 11, no. 1 (Jan 2025), p. 44
Published / Created:
Springer Nature B.V.
Subjects:
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3129239257
003 UK-CbPIL
022 |a 2199-4536 
022 |a 2198-6053 
024 7 |a 10.1007/s40747-024-01648-0  |2 doi 
035 |a 3129239257 
045 2 |b d20250101  |b d20250131 
245 1 |a Multimodal heterogeneous graph fusion for automated obstructive sleep apnea-hypopnea syndrome diagnosis 
260 |b Springer Nature B.V.  |c Jan 2025 
513 |a Journal Article 
520 3 |a Polysomnography is the diagnostic gold standard for obstructive sleep apnea-hypopnea syndrome (OSAHS), requiring medical professionals to analyze apnea-hypopnea events from multidimensional data throughout the sleep cycle. This complex process is susceptible to variability based on the clinician’s experience, leading to potential inaccuracies. Existing automatic diagnosis methods often overlook multimodal physiological signals and medical prior knowledge, leading to limited diagnostic capabilities. This study presents a novel heterogeneous graph convolutional fusion network (HeteroGCFNet) leveraging multimodal physiological signals and domain knowledge for automated OSAHS diagnosis. This framework constructs two types of graph representations: physical space graphs, which map the spatial layout of sensors on the human body, and process knowledge graphs, which detail the physiological relationships among breathing patterns, oxygen saturation, and vital signals. The framework leverages heterogeneous graph convolutional neural networks to extract both localized and global features from these graphs. Additionally, a multi-head fusion module combines these features into a unified representation for effective classification, enhancing focus on relevant signal characteristics and cross-modal interactions. This study evaluated the proposed framework on a large-scale OSAHS dataset, combined from publicly available sources and data provided by a collaborative university hospital. It demonstrated superior diagnostic performance compared to conventional machine learning models and existing deep learning approaches, effectively integrating domain knowledge with data-driven learning to produce explainable representations and robust generalization capabilities, which can potentially be utilized for clinical use. Code is available at https://github.com/AmbitYuki/HeteroGCFNet. 
653 |a Physiology 
653 |a Graphs 
653 |a Sleep apnea 
653 |a Artificial neural networks 
653 |a Graph neural networks 
653 |a Graph representations 
653 |a Oxygen content 
653 |a Diagnosis 
653 |a Multidimensional data 
653 |a Diagnostic systems 
653 |a Automation 
653 |a Machine learning 
653 |a Deep learning 
653 |a Multidimensional methods 
653 |a Graphical representations 
653 |a Knowledge representation 
773 0 |t Complex & Intelligent Systems  |g vol. 11, no. 1 (Jan 2025), p. 44 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3129239257/abstract/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3129239257/fulltextPDF/embedded/75I98GEZK8WCJMPQ?source=fedsrch
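
Note: the abstract (field 520) describes HeteroGCFNet as two graph encoders (a physical-space sensor graph and a process-knowledge graph) whose features are combined by a multi-head fusion module before classification. The sketch below is only an assumed, simplified illustration of that overall shape in plain PyTorch; the class names, dimensions, single-layer graph convolution, and mean-pooling readout are placeholders and do not reflect the published implementation, which is available at https://github.com/AmbitYuki/HeteroGCFNet.

# Minimal sketch (not the authors' code): two graph branches fused by
# multi-head attention, loosely following the abstract's description of
# HeteroGCFNet. Node counts, feature sizes, and layer choices are assumptions.
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    """One graph-convolution layer: X' = ReLU(A_hat @ X @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x: (batch, nodes, in_dim); adj_norm: (nodes, nodes), normalized adjacency
        return torch.relu(adj_norm @ self.linear(x))


class HeteroGraphFusionSketch(nn.Module):
    """Two graph encoders (physical-space and process-knowledge graphs)
    whose node embeddings are fused with multi-head attention."""

    def __init__(self, in_dim=32, hid_dim=64, n_heads=4, n_classes=2):
        super().__init__()
        self.phys_gcn = SimpleGraphConv(in_dim, hid_dim)   # sensor-layout graph branch
        self.proc_gcn = SimpleGraphConv(in_dim, hid_dim)   # knowledge-graph branch
        self.fusion = nn.MultiheadAttention(hid_dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x_phys, adj_phys, x_proc, adj_proc):
        h_phys = self.phys_gcn(x_phys, adj_phys)           # (B, N1, hid)
        h_proc = self.proc_gcn(x_proc, adj_proc)           # (B, N2, hid)
        tokens = torch.cat([h_phys, h_proc], dim=1)        # (B, N1+N2, hid)
        fused, _ = self.fusion(tokens, tokens, tokens)     # cross-modal attention
        pooled = fused.mean(dim=1)                         # global readout
        return self.classifier(pooled)                     # OSAHS classification logits


if __name__ == "__main__":
    B, N1, N2, F = 8, 6, 10, 32                            # toy batch and graph sizes
    model = HeteroGraphFusionSketch(in_dim=F)
    logits = model(torch.randn(B, N1, F), torch.eye(N1),
                   torch.randn(B, N2, F), torch.eye(N2))
    print(logits.shape)                                    # torch.Size([8, 2])

The identity matrices stand in for normalized adjacency matrices of the two graphs; in the paper these would encode sensor placement and physiological relationships, respectively.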