Deep Neural Network-Based Modeling of Multimodal Human–Computer Interaction in Aircraft Cockpits

Bibliographic Details
Published in: Future Internet vol. 17, no. 3 (2025), p. 127
Main author: Wang, Li
Other authors: Zhang, Heming; Wang, Changyuan
Publisher:
MDPI AG
Subjects:
Online access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3181453752
003 UK-CbPIL
022 |a 1999-5903 
024 7 |a 10.3390/fi17030127  |2 doi 
035 |a 3181453752 
045 2 |b d20250101  |b d20251231 
084 |a 231464  |2 nlm 
100 1 |a Wang, Li  |u School of Electronic & Electrical Engineering, Baoji University of Arts and Sciences, Baoji 721016, China 
245 1 |a Deep Neural Network-Based Modeling of Multimodal Human–Computer Interaction in Aircraft Cockpits 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Improving the performance of human–computer interaction systems is an essential indicator of aircraft intelligence. To address the limitations of single-modal interaction methods, a multimodal interaction model based on gaze and EEG target selection is proposed using deep learning technology. The model consists of two parts: target classification and intention recognition. A target classification model based on long short-term memory networks is established and trained on the operator's eye-movement information. An intention recognition model based on transformers is constructed and trained on the operator's EEG information. In the application scenario of the aircraft radar page system, the highest accuracy of the target classification model is 98%. The intention recognition rate obtained by training on 32-channel EEG information in the intention recognition model is 98.5%, higher than that of the other compared models. In addition, the model was validated on a simulated flight platform, and the experimental results show that the proposed multimodal interaction framework outperforms single gaze interaction. 
653 |a Aircraft 
653 |a Software 
653 |a Eye movements 
653 |a Accuracy 
653 |a Classification 
653 |a Recognition 
653 |a Artificial neural networks 
653 |a Aviation 
653 |a Aircraft performance 
653 |a Interaction models 
653 |a Methods 
653 |a Machine learning 
653 |a Speech 
700 1 |a Zhang, Heming  |u School of Optoelectronic Engineering, Xi’an Technological University, Xi’an 710000, China; <email>xatu_zhangheming@163.com</email> 
700 1 |a Wang, Changyuan  |u School of Computer Science, Xi’an Technological University, Xi’an 710021, China; <email>cyw901@163.com</email> 
773 0 |t Future Internet  |g vol. 17, no. 3 (2025), p. 127 
786 0 |d ProQuest  |t ABI/INFORM Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3181453752/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3181453752/fulltextwithgraphics/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3181453752/fulltextPDF/embedded/H09TXR3UUZB2ISDL?source=fedsrch