Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models

Bibliographic Details
Published in: arXiv.org (Jun 15, 2024)
Main author: Luo, Jiayun
Other authors: Khandelwal, Siddhesh; Sigal, Leonid; Li, Boyang
Publisher: Cornell University Library, arXiv.org
Description
Abstract: From image-text pairs, large-scale vision-language models (VLMs) learn to implicitly associate image regions with words, which proves effective for tasks like visual question answering. However, leveraging the learned association for open-vocabulary semantic segmentation remains a challenge. In this paper, we propose a simple, yet extremely effective, training-free technique, Plug-and-Play Open-Vocabulary Semantic Segmentation (PnP-OVSS) for this task. PnP-OVSS leverages a VLM with direct text-to-image cross-attention and an image-text matching loss. To balance between over-segmentation and under-segmentation, we introduce Salience Dropout; by iteratively dropping patches that the model is most attentive to, we are able to better resolve the entire extent of the segmentation mask. PnP-OVSS does not require any neural network training and performs hyperparameter tuning without the need for any segmentation annotations, even for a validation set. PnP-OVSS demonstrates substantial improvements over comparable baselines (+26.2% mIoU on Pascal VOC, +20.5% mIoU on MS COCO, +3.1% mIoU on COCO Stuff and +3.0% mIoU on ADE20K). Our codebase is at https://github.com/letitiabanana/PnP-OVSS.
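
The abstract's Salience Dropout idea is an iterative drop-and-reattend loop: accumulate text-to-patch attention, mask out the most-attended patches, and re-query so less salient parts of the object surface. The sketch below is a minimal, hypothetical illustration of that loop only, not the authors' implementation; every function and parameter name (salience_dropout, attn_fn, drop_frac, etc.) is invented here, and a random stand-in replaces the VLM's text-to-image cross-attention.

import numpy as np

def salience_dropout(attn_fn, num_patches, num_rounds=3, drop_frac=0.1):
    """Accumulate text-to-patch attention over several rounds, dropping the
    most-attended patches before each subsequent round.

    attn_fn: callable(active_mask) -> attention scores over all patches
             (hypothetical stand-in for a VLM's text-to-image cross-attention).
    """
    active = np.ones(num_patches, dtype=bool)   # patches still visible to the model
    accumulated = np.zeros(num_patches)         # running per-patch segmentation evidence
    k = max(1, int(drop_frac * num_patches))    # number of patches dropped per round

    for _ in range(num_rounds):
        scores = attn_fn(active)                 # attend only over still-active patches
        scores = np.where(active, scores, 0.0)   # ignore already-dropped patches
        accumulated += scores
        top = np.argsort(-scores)[:k]            # most-attended active patches
        active[top] = False                      # drop them for the next round
        if not active.any():
            break
    return accumulated

# Toy usage with a random stand-in for cross-attention over a 14x14 patch grid.
rng = np.random.default_rng(0)
demo_attn = lambda active: rng.random(active.shape[0]) * active
mask_scores = salience_dropout(demo_attn, num_patches=196)

In the actual method the attention scores would come from the VLM's cross-attention for a given class prompt, and the accumulated map would be thresholded into a segmentation mask; the loop above only illustrates the iterative dropping mechanism described in the abstract.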
ISSN:2331-8422
Source: Engineering Database