Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models

Bibliographic information
Publication: arXiv.org (Jun 15, 2024), p. n/a
Lead author: Luo, Jiayun
Other authors: Khandelwal, Siddhesh; Sigal, Leonid; Li, Boyang
Published:
Cornell University Library, arXiv.org
Description
Abstract: From image-text pairs, large-scale vision-language models (VLMs) learn to implicitly associate image regions with words, which proves effective for tasks like visual question answering. However, leveraging the learned association for open-vocabulary semantic segmentation remains a challenge. In this paper, we propose a simple yet extremely effective training-free technique, Plug-and-Play Open-Vocabulary Semantic Segmentation (PnP-OVSS), for this task. PnP-OVSS leverages a VLM with direct text-to-image cross-attention and an image-text matching loss. To balance between over-segmentation and under-segmentation, we introduce Salience Dropout; by iteratively dropping patches that the model is most attentive to, we are able to better resolve the entire extent of the segmentation mask. PnP-OVSS does not require any neural network training and performs hyperparameter tuning without the need for any segmentation annotations, even for a validation set. PnP-OVSS demonstrates substantial improvements over comparable baselines (+26.2% mIoU on Pascal VOC, +20.5% mIoU on MS COCO, +3.1% mIoU on COCO Stuff, and +3.0% mIoU on ADE20K). Our codebase is at https://github.com/letitiabanana/PnP-OVSS.
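The Salience Dropout idea described in the abstract can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function name, the `attention_fn` callback, and the drop fraction are assumptions introduced here for clarity. The sketch iteratively zeroes out the most-attended patches and accumulates per-patch attention, so later rounds can surface less-salient parts of the object and the full mask extent emerges.

```python
import numpy as np

def salience_dropout(attention_fn, num_patches, rounds=3, drop_frac=0.1):
    """Illustrative sketch of Salience Dropout (names/parameters are assumptions).

    attention_fn: callable mapping a boolean keep-mask over image patches to a
    per-patch text-to-image attention score (dropped patches should score 0).
    Returns the accumulated per-patch salience after all rounds.
    """
    keep = np.ones(num_patches, dtype=bool)
    accumulated = np.zeros(num_patches)
    k = max(1, int(drop_frac * num_patches))  # patches dropped per round
    for _ in range(rounds):
        scores = attention_fn(keep)                     # attention over surviving patches
        accumulated = np.maximum(accumulated, scores)   # grow the mask estimate
        top = np.argsort(scores)[-k:]                   # most-attended surviving patches
        keep[top] = False                               # drop them for the next round
    return accumulated
```

With a toy `attention_fn` that simply masks a fixed score vector, the accumulated output recovers every patch's score even though the top patches are dropped after the first round.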
ISSN:2331-8422
Source: Engineering Database