Design of eye control human–computer interaction interface based on smooth tracking

Bibliographic Details
Published in: SN Applied Sciences vol. 7, no. 11 (Nov 2025), p. 1346
Main Author: Ding, Chuanfeng
Published: Springer Nature B.V.
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3269829965
003 UK-CbPIL
022 |a 2523-3963 
022 |a 2523-3971 
024 7 |a 10.1007/s42452-025-07884-4  |2 doi 
035 |a 3269829965 
045 2 |b d20251101  |b d20251130 
100 1 |a Ding, Chuanfeng  |u Zhengzhou University of Science and Technology, College of Art, Zhengzhou, China (GRID:grid.512433.2) 
245 1 |a Design of eye control human–computer interaction interface based on smooth tracking 
260 |b Springer Nature B.V.  |c Nov 2025 
513 |a Journal Article 
520 3 |a A deep learning model based on smooth tracking, the adaptive lightweight eye control tracking Transformer (ALETT), is proposed to address the accuracy and real-time performance issues of eye control human–computer interaction interfaces. Built on convolutional neural network and recurrent neural network frameworks, ALETT demonstrates excellent performance in experimental validation on the EYEDIAP, DUT, and GazeCapture datasets. On the EYEDIAP dataset, the proposed model achieved an area under the curve (AUC) value of 0.934 and a specificity of 0.902, with a mean absolute error of only 0.065 and an average inference time of 12.453 ms. On the DUT dataset, the AUC value is 0.917, the specificity is 0.895, the mean absolute error is 0.062, and the inference time is 11.789 ms. The best performance is achieved on the GazeCapture dataset, with an AUC value of 0.944, a specificity of 0.910, a mean absolute error of 0.056, and an inference time of 10.756 ms. The findings indicate that the proposed model offers significant advantages in improving the accuracy and response speed of eye control interaction, opening new possibilities for the application of eye control technology in fields such as virtual reality, education and training, and health monitoring. Article highlights: A lightweight, adaptive eye-tracking model, ALETT, is introduced for improved human–computer interaction. ALETT demonstrates high accuracy and low inference time across multiple datasets, including EYEDIAP and GazeCapture. The proposed method has advantages in improving the accuracy and response speed of eye control interaction. 
653 |a Accuracy 
653 |a Usability 
653 |a User experience 
653 |a Electrodes 
653 |a Success 
653 |a Artificial neural networks 
653 |a Computer applications 
653 |a Eye movements 
653 |a Tracking 
653 |a Performance evaluation 
653 |a Virtual reality 
653 |a Deep learning 
653 |a Efficiency 
653 |a Machine learning 
653 |a Datasets 
653 |a Research methodology 
653 |a User needs 
653 |a Artificial intelligence 
653 |a Human-computer interface 
653 |a Computer vision 
653 |a Neural networks 
653 |a Inference 
653 |a Recurrent neural networks 
653 |a Errors 
653 |a Algorithms 
653 |a Real time 
653 |a Interfaces 
653 |a Environmental 
773 0 |t SN Applied Sciences  |g vol. 7, no. 11 (Nov 2025), p. 1346 
786 0 |d ProQuest  |t Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3269829965/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3269829965/fulltext/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3269829965/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch