Deep Neural Network-Based Modeling of Multimodal Human–Computer Interaction in Aircraft Cockpits

Bibliographic Details
Published in: Future Internet, vol. 17, no. 3 (2025), p. 127
Main Author: Wang, Li
Other Authors: Zhang, Heming; Wang, Changyuan
Published: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: Improving the performance of human–computer interaction systems is an essential indicator of aircraft intelligence. To address the limitations of single-modal interaction methods, a multimodal interaction model based on gaze and EEG target selection is proposed using deep learning technology. The model consists of two parts: target classification and intention recognition. A target classification model based on long short-term memory (LSTM) networks is established and trained on the operator's eye movement information. An intention recognition model based on Transformers is constructed and trained on the operator's EEG information. In the application scenario of the aircraft radar page system, the target classification model reaches a highest accuracy of 98%. Trained on 32-channel EEG data, the intention recognition model achieves a recognition rate of 98.5%, higher than the other compared models. In addition, the model is validated on a simulated flight platform, and the experimental results show that the proposed multimodal interaction framework outperforms single-modal gaze interaction.
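The abstract describes the two sub-models (an LSTM target classifier driven by gaze features and a Transformer intention recognizer driven by 32-channel EEG) but does not give their implementation details. The following is a minimal PyTorch-style sketch of how such a two-part pipeline might be composed; the class names, feature dimensions, layer counts, and number of output classes are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class GazeTargetClassifier(nn.Module):
    """LSTM over a sequence of gaze features -> target class logits (assumed layout)."""
    def __init__(self, feat_dim=4, hidden=64, n_targets=9):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):              # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)       # last hidden state summarizes the gaze sequence
        return self.head(h[-1])        # (batch, n_targets)

class EEGIntentionRecognizer(nn.Module):
    """Transformer encoder over 32-channel EEG windows -> intention logits (assumed layout)."""
    def __init__(self, n_channels=32, d_model=64, n_heads=4, n_layers=2, n_intents=2):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_intents)

    def forward(self, x):              # x: (batch, time, n_channels)
        z = self.encoder(self.proj(x)) # per-timestep features after self-attention
        return self.head(z.mean(dim=1))  # mean-pool over time, then classify

# Hypothetical usage: classify the gazed target, then confirm selection intent from EEG.
gaze_model, eeg_model = GazeTargetClassifier(), EEGIntentionRecognizer()
target_logits = gaze_model(torch.randn(1, 50, 4))     # 50 gaze samples
intent_logits = eeg_model(torch.randn(1, 256, 32))    # 256 EEG samples x 32 channels
```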
ISSN:1999-5903
DOI:10.3390/fi17030127
Source: ABI/INFORM Global