Deep Neural Network-Based Modeling of Multimodal Human–Computer Interaction in Aircraft Cockpits

Published: Future Internet, vol. 17, no. 3 (2025), p. 127
Author: Wang, Li
Other Authors: Zhang, Heming; Wang, Changyuan
Publisher: MDPI AG
Other Information
Abstract: Improving the performance of human–computer interaction systems is an essential indicator of aircraft intelligence. To address the limitations of single-modal interaction methods, a multimodal interaction model based on gaze and EEG target selection is proposed using deep learning technology. The model consists of two parts: target classification and intention recognition. A target classification model based on long short-term memory (LSTM) networks is built and trained on the operator's eye-movement data. An intention recognition model based on Transformers is constructed and trained on the operator's EEG data. In the application scenario of the aircraft radar page system, the target classification model reaches a highest accuracy of 98%. The intention recognition model trained on 32-channel EEG data achieves a recognition rate of 98.5%, higher than the compared models. In addition, the model was validated on a simulated flight platform, and the experimental results show that the proposed multimodal interaction framework outperforms single-modal gaze interaction.
ISSN: 1999-5903
DOI: 10.3390/fi17030127
Source: ABI/INFORM Global
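
The abstract describes a two-part deep model: an LSTM that classifies the gazed interaction target from eye-movement sequences, and a Transformer that recognizes interaction intention from 32-channel EEG. The following is a minimal sketch of that structure in PyTorch, not the authors' code; the layer sizes, sequence lengths, feature counts, and class counts are illustrative assumptions only.

# Minimal sketch (illustrative assumptions, not the authors' implementation):
# an LSTM gaze-target classifier and a Transformer-encoder EEG intention model.
import torch
import torch.nn as nn

class GazeTargetClassifier(nn.Module):
    """LSTM over an eye-movement feature sequence -> interaction target class."""
    def __init__(self, n_features=4, hidden=64, n_targets=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, gaze_seq):          # gaze_seq: (batch, time, n_features)
        _, (h, _) = self.lstm(gaze_seq)   # final hidden state summarizes the sequence
        return self.head(h[-1])           # (batch, n_targets) logits

class EEGIntentionRecognizer(nn.Module):
    """Transformer encoder over 32-channel EEG windows -> interaction intention."""
    def __init__(self, n_channels=32, d_model=64, n_intentions=2):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_intentions)

    def forward(self, eeg_seq):           # eeg_seq: (batch, time, n_channels)
        z = self.encoder(self.proj(eeg_seq))
        return self.head(z.mean(dim=1))   # average-pool over time, then classify

# Example shapes only: 100 gaze samples x 4 features, 256 EEG samples x 32 channels.
gaze_logits = GazeTargetClassifier()(torch.randn(8, 100, 4))
intent_logits = EEGIntentionRecognizer()(torch.randn(8, 256, 32))

In a multimodal pipeline of this kind, the gaze branch would nominate the interface element being looked at while the EEG branch confirms whether the operator actually intends to select it; how the two outputs are fused is described in the full paper, not in this sketch.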