Remote Sensing Imagery for Multi-Stage Vehicle Detection and Classification via YOLOv9 and Deep Learner

Bibliographic details
Published in: Computers, Materials, & Continua, vol. 84, no. 3 (2025), p. 4491-4510
Main author: Mudawi, Naif
Other authors: Hanzla, Muhammad; Alazeb, Abdulwahab; Alshehri, Mohammed; Alhasson, Haifa; Alhammadi, Dina; Ahmad, Jalal
Published: Tech Science Press
Subjects: Traffic surveillance; Feature extraction; Background noise; Classification; Image resolution; Neural networks; Conditional random fields; Unmanned aerial vehicles; Remote sensing; Aerial photography; Calibration; Urban planning; Weather; Discriminant analysis; Object recognition; Deep learning; Machine learning; Real time; Statistical analysis; Cost effectiveness; Vehicles
Online access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3238361607
003 UK-CbPIL
022 |a 1546-2218 
022 |a 1546-2226 
024 7 |a 10.32604/cmc.2025.065490  |2 doi 
035 |a 3238361607 
045 2 |b d20250101  |b d20251231 
100 1 |a Mudawi, Naif 
245 1 |a Remote Sensing Imagery for Multi-Stage Vehicle Detection and Classification via YOLOv9 and Deep Learner 
260 |b Tech Science Press  |c 2025 
513 |a Journal Article 
520 3 |a Unmanned Aerial Vehicles (UAVs) are increasingly employed in traffic surveillance, urban planning, and infrastructure monitoring due to their cost-effectiveness, flexibility, and high-resolution imaging. However, vehicle detection and classification in aerial imagery remain challenging due to scale variations from fluctuating UAV altitudes, frequent occlusions in dense traffic, and environmental noise, such as shadows and lighting inconsistencies. Traditional methods, including sliding-window searches and shallow learning techniques, suffer from computational inefficiency and poor robustness under dynamic conditions. To address these limitations, this study proposes a six-stage hierarchical framework integrating radiometric calibration, deep learning, and classical feature engineering. The workflow begins with radiometric calibration to normalize pixel intensities and mitigate sensor noise, followed by Conditional Random Field (CRF) segmentation to isolate vehicles. YOLOv9, equipped with a bi-directional feature pyramid network (BiFPN), performs precise multi-scale object detection. Hybrid feature extraction employs Maximally Stable Extremal Regions (MSER) for stable contour detection, Binary Robust Independent Elementary Features (BRIEF) for texture encoding, and Affine-SIFT (ASIFT) for viewpoint invariance. Quadratic Discriminant Analysis (QDA) enhances feature discrimination, while a Probabilistic Neural Network (PNN) performs Bayesian probability-based classification. Tested on the Roundabout Aerial Imagery (15,474 images, 985K instances) and AU-AIR (32,823 instances, 7 classes) datasets, the model achieves state-of-the-art accuracy of 95.54% and 94.14%, respectively. Its superior performance in detecting small-scale vehicles and resolving occlusions highlights its potential for intelligent traffic systems. Future work will extend testing to nighttime and adverse weather conditions while optimizing real-time UAV inference. (A minimal illustrative sketch of this pipeline appears after the MARC record below.) 
653 |a Traffic surveillance 
653 |a Feature extraction 
653 |a Background noise 
653 |a Classification 
653 |a Image resolution 
653 |a Neural networks 
653 |a Conditional random fields 
653 |a Unmanned aerial vehicles 
653 |a Remote sensing 
653 |a Aerial photography 
653 |a Calibration 
653 |a Urban planning 
653 |a Weather 
653 |a Discriminant analysis 
653 |a Object recognition 
653 |a Deep learning 
653 |a Machine learning 
653 |a Real time 
653 |a Statistical analysis 
653 |a Cost effectiveness 
653 |a Vehicles 
700 1 |a Hanzla, Muhammad 
700 1 |a Alazeb, Abdulwahab 
700 1 |a Alshehri, Mohammed 
700 1 |a Alhasson, Haifa 
700 1 |a Alhammadi, Dina 
700 1 |a Ahmad, Jalal 
773 0 |t Computers, Materials, & Continua  |g vol. 84, no. 3 (2025), p. 4491-4510 
786 0 |d ProQuest  |t Publicly Available Content Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3238361607/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3238361607/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
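
The 520 abstract above describes a six-stage pipeline: radiometric calibration, CRF segmentation, YOLOv9+BiFPN detection, hybrid MSER/BRIEF/ASIFT feature extraction, QDA, and PNN classification. Below is a minimal illustrative sketch of those stages, not the authors' implementation. Assumptions not taken from this record: OpenCV with contrib modules supplies MSER, BRIEF, and the ASIFT wrapper; scikit-learn supplies QDA; the CRF and YOLOv9+BiFPN stages are stubbed as placeholders; and the PNN is approximated by a Parzen-window (Gaussian-kernel) classifier, which implements the classic PNN decision rule under equal priors.

# Illustrative sketch only; stage names map to the abstract's pipeline.
import numpy as np
import cv2
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def radiometric_calibration(img, gain=1.0, offset=0.0):
    """Stage 1: gain/offset intensity normalization to suppress sensor noise."""
    out = gain * img.astype(np.float32) + offset
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def segment_vehicles(img):
    """Stage 2 placeholder: the paper uses CRF segmentation; a dense-CRF
    library would go here. This stub just passes the calibrated frame on."""
    return img

def detect_vehicles(img):
    """Stage 3 placeholder: the paper's detector is YOLOv9 with BiFPN; any
    detector returning (x, y, w, h) boxes can be dropped in here."""
    raise NotImplementedError

def hybrid_features(patch):
    """Stage 4: MSER shape cues + BRIEF texture code + ASIFT viewpoint cues,
    mean-pooled (a simplification) into one fixed-length vector for QDA."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    regions, _ = cv2.MSER_create().detectRegions(gray)
    areas = [len(r) for r in regions] or [0]
    shape = np.array([len(regions), np.mean(areas)], dtype=np.float32)
    kps = cv2.FastFeatureDetector_create().detect(gray)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()  # contrib module
    _, tex_desc = brief.compute(gray, kps)
    texture = tex_desc.mean(axis=0) if tex_desc is not None else np.zeros(32)
    asift = cv2.AffineFeature_create(cv2.SIFT_create())  # ASIFT wrapper
    _, view_desc = asift.detectAndCompute(gray, None)
    view = view_desc.mean(axis=0) if view_desc is not None else np.zeros(128)
    return np.concatenate([shape, texture, view])

def pnn_predict(X_train, y_train, x, sigma=1.0):
    """Stage 6 approximation: every training sample casts a Gaussian-kernel
    vote for its class; the largest summed activation wins (equal priors)."""
    scores = {}
    for c in np.unique(y_train):
        d = X_train[y_train == c] - x
        scores[c] = np.exp(-(d * d).sum(axis=1) / (2.0 * sigma**2)).sum()
    return max(scores, key=scores.get)

def classify_patches(patches, labels, query_patch):
    """Stages 4-6 end to end: features -> QDA embedding -> PNN-style vote."""
    X = np.stack([hybrid_features(p) for p in patches])
    # Stage 5: QDA sharpens class separation; reg_param stabilizes the
    # per-class covariance estimates when samples are scarce.
    qda = QuadraticDiscriminantAnalysis(reg_param=0.1)
    qda.fit(X, labels)
    # Use QDA log-posteriors as a discriminative embedding for the PNN stage.
    Z = qda.predict_log_proba(X)
    z = qda.predict_log_proba(hybrid_features(query_patch)[None, :])[0]
    return pnn_predict(Z, np.asarray(labels), z)

The mean-pooled descriptors are a deliberate simplification so that QDA receives fixed-length vectors; the paper's exact feature encoding, its CRF formulation, and the YOLOv9 training details are not recoverable from this catalog record.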