Machine vision based perception for vehicle-mounted UAV autonomous landing under GNSS-denied environments

Bibliographic Details
Published in: Journal of King Saud University. Computer and Information Sciences vol. 37, no. 10 (Dec 2025), p. 334
Main Author: Ma, Pengbo
Other Authors: He, Chenyuan; Zhang, Zhouyu; Xv, Zhan; Wang, Hai; Cai, Yingfeng; Chen, Long; Zhong, Can; Zhang, Yiqun
Published:
Springer Nature B.V.
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3275907296
003 UK-CbPIL
022 |a 1319-1578 
024 7 |a 10.1007/s44443-025-00345-3  |2 doi 
035 |a 3275907296 
045 2 |b d20251201  |b d20251231 
100 1 |a Ma, Pengbo  |u Jiangsu University, School of Automotive and Traffic Engineering, Zhenjiang, China (GRID:grid.440785.a) (ISNI:0000 0001 0743 511X) 
245 1 |a Machine vision based perception for vehicle-mounted UAV autonomous landing under GNSS-denied environments 
260 |b Springer Nature B.V.  |c Dec 2025 
513 |a Journal Article 
520 3 |a With the growing demand for collaborative Unmanned Aerial Vehicle (UAV) and Unmanned Ground Vehicle (UGV) operations, precise landing of a vehicle-mounted UAV on a moving platform in complex environments has become a significant challenge, limiting the functionality of collaborative systems. This paper presents an autonomous landing perception scheme for a vehicle-mounted UAV, specifically designed for GNSS-denied environments to enhance landing capabilities. First, to address the challenges of insufficient illumination in airborne visual perception, an airborne infrared and visible image fusion method is employed to enhance image detail and contrast. Second, a feature enhancement network and region proposal network optimized for small object detection are explored to improve the detection of moving platforms during UAV landing. Finally, a relative pose and position estimation method based on the orthogonal iteration algorithm is investigated to reduce visual pose and position estimation errors and iteration time. Both simulation results and field tests demonstrate that the proposed algorithm performs robustly under low-light and foggy conditions, achieving accurate pose and position estimation even in scenarios with inadequate illumination. 
653 |a Field tests 
653 |a Landing 
653 |a Iterative algorithms 
653 |a Usability 
653 |a Deep learning 
653 |a Collaboration 
653 |a Machine vision 
653 |a Visual perception 
653 |a Unmanned aerial vehicles 
653 |a Neural networks 
653 |a Sensors 
653 |a Infrared imagery 
653 |a Unmanned ground vehicles 
653 |a Computer vision 
653 |a Algorithms 
653 |a Object recognition 
653 |a Light 
653 |a Illumination 
653 |a Border patrol 
653 |a Visual perception driven algorithms 
653 |a Vehicles 
700 1 |a He, Chenyuan  |u Jiangsu University, School of Automotive and Traffic Engineering, Zhenjiang, China (GRID:grid.440785.a) (ISNI:0000 0001 0743 511X); State Key Laboratory of Autonomous Intelligent Unmanned Systems, Beijing, China (GRID:grid.440785.a); Vehicle Measurement, Control and Safety Key Laboratory of Sichuan Province, Chengdu, China (GRID:grid.440785.a) 
700 1 |a Zhang, Zhouyu  |u Jiangsu University, School of Automotive and Traffic Engineering, Zhenjiang, China (GRID:grid.440785.a) (ISNI:0000 0001 0743 511X) 
700 1 |a Xv, Zhan  |u Jiangsu University, School of Automotive and Traffic Engineering, Zhenjiang, China (GRID:grid.440785.a) (ISNI:0000 0001 0743 511X) 
700 1 |a Wang, Hai  |u Jiangsu University, School of Automotive and Traffic Engineering, Zhenjiang, China (GRID:grid.440785.a) (ISNI:0000 0001 0743 511X) 
700 1 |a Cai, Yingfeng  |u Jiangsu University, Automotive Engineering Research Institute, Zhenjiang, China (GRID:grid.440785.a) (ISNI:0000 0001 0743 511X) 
700 1 |a Chen, Long  |u Jiangsu University, Automotive Engineering Research Institute, Zhenjiang, China (GRID:grid.440785.a) (ISNI:0000 0001 0743 511X) 
700 1 |a Zhong, Can  |u Beijing Engineering Research Center of Aerial Intelligent Remote Sensing Equipments, Beijing, China (GRID:grid.440785.a) 
700 1 |a Zhang, Yiqun  |u TopXGun (Nanjing) Robotics Company Limited, Nanjing, China (GRID:grid.440785.a) 
773 0 |t Journal of King Saud University. Computer and Information Sciences  |g vol. 37, no. 10 (Dec 2025), p. 334 
786 0 |d ProQuest  |t Computer Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3275907296/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3275907296/fulltext/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3275907296/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
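
Note: the abstract (field 520) states that relative pose and position estimation is based on the orthogonal iteration algorithm. For background only, the sketch below illustrates classic orthogonal iteration (Lu, Hager & Mjolsness, 2000), which alternates a closed-form solve for translation with an absolute-orientation (Kabsch/SVD) update of rotation to minimize the object-space error. This is an illustrative reconstruction under standard assumptions, not the paper's implementation; the function name, signature, and parameters are hypothetical.

```python
import numpy as np

def orthogonal_iteration(P, uv, R0=None, max_iters=100, tol=1e-10):
    """Hypothetical sketch: estimate camera pose (R, t) from 3D model
    points P (n,3) and normalized image coordinates uv (n,2) by
    minimizing the object-space error sum_i ||(I - V_i)(R p_i + t)||^2
    (orthogonal iteration, Lu, Hager & Mjolsness 2000)."""
    n = P.shape[0]
    v = np.hstack([uv, np.ones((n, 1))])  # homogeneous line-of-sight vectors
    # Line-of-sight projection matrices V_i = v_i v_i^T / (v_i^T v_i)
    V = np.einsum('ni,nj->nij', v, v) / (v * v).sum(axis=1)[:, None, None]
    # Precomputed inverse used by the closed-form optimal translation
    A_inv = np.linalg.inv(n * np.eye(3) - V.sum(axis=0))

    def optimal_t(R):
        # t(R) = [sum_i (I - V_i)]^{-1} sum_i (V_i - I) R p_i
        return A_inv @ np.einsum('nij,nj->i', V - np.eye(3), P @ R.T)

    R = np.eye(3) if R0 is None else R0
    prev_err = np.inf
    for _ in range(max_iters):
        t = optimal_t(R)
        # Project transformed points onto their sight lines: q_i = V_i (R p_i + t)
        Q = np.einsum('nij,nj->ni', V, P @ R.T + t)
        # Absolute-orientation step (Kabsch/SVD): rotation best aligning P to Q
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Qc.T @ Pc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt
        # Object-space error for the updated pose; stop when it plateaus
        t = optimal_t(R)
        err = np.sum(np.einsum('nij,nj->ni', np.eye(3) - V, P @ R.T + t) ** 2)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```

A quick synthetic check of such a sketch: generate random 3D points in front of the camera, project them with a known (R, t), pass the projections as uv, and confirm the recovered pose matches the ground truth.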