ESA-YOLO: An efficient scale-aware traffic sign detection algorithm based on YOLOv11 under adverse weather conditions

Bibliographic Details
Published in: PLoS One vol. 20, no. 11 (Nov 2025), p. e0336863
Main Author: Li, ChenHao
Other Authors: Liu, ShuXian; Peng, ZiNuo
Published: Public Library of Science
Online Access: Citation/Abstract, Full Text, Full Text - PDF
Description
Abstract: Traffic sign detection is a critical component of autonomous driving and advanced driver assistance systems, yet challenges persist in achieving high accuracy while maintaining efficiency, particularly for multi-scale and small objects in complex scenes. This paper proposes an improved YOLOv11-based traffic sign detection algorithm that addresses these challenges through three key innovations: (1) A Dense Multi-path Feature Pyramid Network (DMFPN) that strengthens multi-scale feature fusion by enabling comprehensive bidirectional interaction between high-level semantic and low-level spatial information, augmented by a dynamic weighted fusion mechanism. (2) A Context-Aware Gating Block (CAGB) that efficiently integrates local and global contextual information through lightweight token and channel mixers, enhancing small-object detection without excessive computational overhead. (3) An Adaptive Scene Perception Head (ASPH) that synergistically combines multi-scale feature extraction with attention mechanisms to improve robustness under adverse weather conditions. Extensive experiments on the TT100K and CCTSDB2021 datasets demonstrate the model's superior performance. On the TT100K dataset, our model outperforms the state-of-the-art YOLOv11n model, achieving improvements of 3.8% in mAP@50 and 3.9% in mAP@50-95 while maintaining comparable computational complexity and reducing parameters by 20%. Similar gains are observed on the CCTSDB2021 dataset, with enhancements of 2.3% in mAP@50 and 1.8% in mAP@50-95. Furthermore, the experimental results demonstrate that the proposed model achieves superior small-object detection and greater robustness in complex environments compared to mainstream competitors.
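The abstract does not specify the form of DMFPN's dynamic weighted fusion mechanism. A common realization of dynamic weighting in feature pyramids is fast normalized fusion (popularized by BiFPN), where each input feature map gets a learnable non-negative scalar weight, normalized to sum to one before fusion. The sketch below is illustrative under that assumption; the function name, NumPy formulation, and epsilon value are not taken from the paper.

```python
import numpy as np

def weighted_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped feature maps with non-negative scalar weights,
    normalized so they sum to ~1 (fast normalized fusion, BiFPN-style).

    In a real network, `weights` would be learnable parameters and the
    inputs would be resized to a common resolution before fusion.
    """
    # Clamp weights to be non-negative (ReLU), as in fast normalized fusion.
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)
    # Normalize; eps avoids division by zero when all weights are clamped.
    w = w / (w.sum() + eps)
    # Weighted sum of the input feature maps.
    return sum(wi * f for wi, f in zip(w, features))

# Example: fuse a high-level and a low-level map with weights 3:1.
hi_level = np.ones((4, 4))
lo_level = np.zeros((4, 4))
fused = weighted_fusion([hi_level, lo_level], [3.0, 1.0])
```

With weights 3.0 and 1.0, the normalized coefficients are approximately 0.75 and 0.25, so the fused map is close to 0.75 everywhere; the epsilon makes the result very slightly smaller than the exact ratio.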
ISSN:1932-6203
DOI:10.1371/journal.pone.0336863
Source: Health & Medical Collection