Predicting Landing Position Deviation in Low-Visibility and Windy Environment Using Pilots’ Eye Movement Features

Bibliographic Details
Published in: Aerospace, vol. 12, no. 6 (2025), p. 523
Main Author: Li, Xiuyi
Other Authors: Zhou, Yue; Zhao, Weiwei; Fu, Chuanyun; Huang, Zhuocheng; Li, Nianqian; Xu, Haibo
Publisher: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: Pilots' eye movement features are critical during aircraft landing, especially in low-visibility and windy conditions. This study conducts simulated flight experiments on aircraft approach and landing under low visibility in three wind conditions: no wind, crosswind, and tailwind. Eye movement data from 30 participants are collected after the transition from the instrument approach to the visual approach, and the landing position deviation is measured. A random forest method then ranks the eye movement features, and feature sets are constructed sequentially by feature importance. Two machine learning models (SVR and RF) and four deep learning models (GRU, LSTM, CNN-GRU, and CNN-LSTM) are trained on these feature sets to predict the landing position deviation. The results show that cumulative fixation duration on the heading indicator, altimeter, airspeed indicator, and external scenery is vital for predicting landing position deviation under no-wind conditions, while approaches under crosswind and tailwind conditions demand more complex attention allocation. By the MAE metric, CNN-LSTM gives the best prediction performance and stability under no-wind conditions, whereas CNN-GRU is better for the crosswind and tailwind cases. By the RMSE metric, RF also performs well, making it well suited to predicting outlier landing position deviations.
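The core pipeline in the abstract (random-forest importance ranking, sequential feature-set construction, model training, MAE/RMSE evaluation) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature count, model hyperparameters, and train/test split are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in for eye movement features (e.g. cumulative fixation
# durations on instruments) and landing position deviation as the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] + X[:, 2] + rng.normal(scale=0.1, size=200)

# Step 1: rank features by random forest importance (most important first).
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]

# Step 2: build feature sets sequentially by importance and evaluate a
# predictor (SVR here) on each set with MAE and RMSE.
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
results = []
for k in range(1, len(order) + 1):
    cols = order[:k]
    pred = SVR().fit(X_tr[:, cols], y_tr).predict(X_te[:, cols])
    mae = mean_absolute_error(y_te, pred)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    results.append((k, mae, rmse))
```

The same loop would apply unchanged to the deep models (GRU, LSTM, CNN-GRU, CNN-LSTM); only the estimator inside the loop changes.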
ISSN: 2226-4310
DOI: 10.3390/aerospace12060523
Source: Advanced Technologies & Aerospace Database