Remote Sensing Imagery for Multi-Stage Vehicle Detection and Classification via YOLOv9 and Deep Learner

Bibliographic Details
Published in: Computers, Materials & Continua, vol. 84, no. 3 (2025), pp. 4491-4510
Main Author: Mudawi, Naif
Other Authors: Hanzla, Muhammad; Alazeb, Abdulwahab; Alshehri, Mohammed; Alhasson, Haifa; Alhammadi, Dina; Ahmad, Jalal
Publisher: Tech Science Press
Online Access: Citation/Abstract; Full Text - PDF
Description
Abstract: Unmanned Aerial Vehicles (UAVs) are increasingly employed in traffic surveillance, urban planning, and infrastructure monitoring due to their cost-effectiveness, flexibility, and high-resolution imaging. However, vehicle detection and classification in aerial imagery remain challenging due to scale variations from fluctuating UAV altitudes, frequent occlusions in dense traffic, and environmental noise such as shadows and lighting inconsistencies. Traditional methods, including sliding-window searches and shallow learning techniques, struggle with computational inefficiency and poor robustness under dynamic conditions. To address these limitations, this study proposes a six-stage hierarchical framework integrating radiometric calibration, deep learning, and classical feature engineering. The workflow begins with radiometric calibration to normalize pixel intensities and mitigate sensor noise, followed by Conditional Random Field (CRF) segmentation to isolate vehicles. YOLOv9, equipped with a bi-directional feature pyramid network (BiFPN), performs multi-scale object detection. Hybrid feature extraction employs Maximally Stable Extremal Regions (MSER) for stable contour detection, Binary Robust Independent Elementary Features (BRIEF) for texture encoding, and Affine-SIFT (ASIFT) for viewpoint invariance. Quadratic Discriminant Analysis (QDA) enhances feature discrimination, while a Probabilistic Neural Network (PNN) performs Bayesian probability-based classification. Tested on the Roundabout Aerial Imagery (15,474 images, 985K instances) and AU-AIR (32,823 instances, 7 classes) datasets, the model achieves state-of-the-art accuracy of 95.54% and 94.14%, respectively. Its superior performance in detecting small-scale vehicles and resolving occlusions highlights its potential for intelligent traffic systems. Future work will extend testing to nighttime and adverse weather conditions while optimizing real-time UAV inference.
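As a rough illustration of the framework's first stage, radiometric normalization can be sketched as a per-band gain/offset correction followed by min-max stretching. This is a minimal sketch under stated assumptions, not the paper's implementation: the function name and the calibration coefficients are hypothetical, standing in for whatever sensor-specific calibration the authors apply.

```python
import numpy as np

def radiometric_normalize(band, gain=1.0, offset=0.0):
    """Apply a linear sensor calibration (gain * DN + offset),
    then min-max stretch the result to [0, 1].

    `gain` and `offset` are placeholders for real sensor calibration
    coefficients; the stretch suppresses global illumination shifts
    such as uneven exposure across frames.
    """
    radiance = gain * band.astype(np.float64) + offset
    lo, hi = radiance.min(), radiance.max()
    if hi == lo:                      # flat band: nothing to stretch
        return np.zeros_like(radiance)
    return (radiance - lo) / (hi - lo)

# Example: a tiny 8-bit aerial band with uneven exposure
band = np.array([[10, 200], [55, 255]], dtype=np.uint8)
norm = radiometric_normalize(band, gain=0.9, offset=2.0)
```

After this step every band lies in a common [0, 1] range, so the downstream CRF segmentation and feature extractors see intensities that are comparable across frames captured at different altitudes and lighting.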
ISSN: 1546-2218 (print); 1546-2226 (online)
DOI: 10.32604/cmc.2025.065490
Source: Publicly Available Content Database