Text-Guided Object Detection Accuracy Enhancement Method Based on Improved YOLO-World

Bibliographic Details
Published in: Electronics, vol. 14, no. 1 (2025), p. 133
Main author: Ding, Qian
Other authors: Zhang, Enzheng, Liu, Zhiguo, Yao, Xinhai, Pan, Gaofeng
Publisher: MDPI AG
Subjects:
Description
Abstract: In intelligent human–robot interaction scenarios, rapidly and accurately searching for and recognizing specific targets is essential for enhancing robot operation and navigation capabilities, as well as for achieving effective human–robot collaboration. This paper proposes an improved YOLO-World method with an integrated attention mechanism for text-guided object detection, aiming to boost visual detection accuracy. The method incorporates SPD-Conv modules into the YOLOv8 backbone to enhance low-resolution image processing and feature representation for small and medium-sized targets. Additionally, EMA is introduced to improve the text-guided visual feature representation, and spatial attention focuses the model on image areas related to the text, enhancing its perception of the specific target regions described there. The improved YOLO-World method with the attention mechanism is detailed in the paper. Comparative experiments with four advanced object detection algorithms on COCO and a custom dataset show that the proposed method not only significantly improves object detection accuracy but also generalizes well across varying scenes. This research offers a reference for high-precision object detection and provides technical solutions for applications requiring accurate object detection, such as human–robot interaction and artificial intelligence robots.
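The abstract names SPD-Conv as the backbone modification used to preserve detail on low-resolution inputs and small targets. Below is a minimal PyTorch sketch of a space-to-depth convolution block of that general kind; the class name, channel sizes, and the BatchNorm/SiLU choices are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Hypothetical space-to-depth + non-strided convolution block.

    Rearranges each 2x2 spatial block into the channel dimension and then
    applies a stride-1 convolution, so downsampling discards no pixels the
    way a strided convolution or pooling layer would.
    """
    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels * scale * scale, out_channels,
                      kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.SiLU(),
        )

    def forward(self, x):
        s = self.scale
        b, c, h, w = x.shape
        # Space-to-depth: (B, C, H, W) -> (B, C*s*s, H/s, W/s)
        x = x.view(b, c, h // s, s, w // s, s)
        x = x.permute(0, 1, 3, 5, 2, 4).contiguous()
        x = x.view(b, c * s * s, h // s, w // s)
        return self.conv(x)

# Usage: downsample a feature map without strided convolution or pooling.
if __name__ == "__main__":
    feat = torch.randn(1, 64, 640, 640)
    print(SPDConv(64, 128)(feat).shape)  # torch.Size([1, 128, 320, 320])
```

The design point is that the space-to-depth step moves spatial detail into channels instead of discarding it, which is why such blocks are generally reported to help with low-resolution images and small objects.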
ISSN: 2079-9292
DOI: 10.3390/electronics14010133
Source: Advanced Technologies & Aerospace Database