In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation

Bibliographic details
Published in: arXiv.org (Aug 9, 2024), p. n/a
Main author: Kang, Dahyun
Other authors: Cho, Minsu
Published:
Cornell University Library, arXiv.org
Subjects:
Description
Abstract: We present lazy visual grounding, a two-stage approach of unsupervised object mask discovery followed by object grounding, for open-vocabulary semantic segmentation. Much prior work casts this task as pixel-to-text classification without object-level comprehension, leveraging the image-to-text classification capability of pretrained vision-and-language models. We argue that visual objects are distinguishable without prior text information, as segmentation is essentially a vision task. Lazy visual grounding first discovers object masks covering an image with iterative Normalized cuts and then assigns text to the discovered objects in a late-interaction manner. Our model requires no additional training yet shows strong performance on five public datasets: Pascal VOC, Pascal Context, COCO-object, COCO-stuff, and ADE 20K. In particular, the visually appealing segmentation results demonstrate the model's ability to localize objects precisely. Paper homepage: https://cvlab.postech.ac.kr/research/lazygrounding
ISSN:2331-8422
Source: Engineering Database
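
The abstract describes a two-stage pipeline: unsupervised mask discovery via iterative Normalized cuts, followed by late-interaction grounding of text onto the discovered masks. The Python sketch below only illustrates that general idea under assumed inputs; the feature extractor, affinity construction, iteration count, and class names are hypothetical stand-ins, not the authors' implementation.

# Rough sketch of the two-stage idea from the abstract:
# (1) discover object masks with an iterative Normalized-cut-style bipartition
#     of a patch-affinity graph, then (2) ground each mask by late interaction
#     with text embeddings. All features and thresholds below are placeholders.
import numpy as np


def normalized_cut_bipartition(affinity: np.ndarray) -> np.ndarray:
    """Split a patch graph in two using the second-smallest eigenvector
    of the normalized Laplacian (the classic Ncut relaxation)."""
    d = affinity.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-8)))
    laplacian = np.eye(len(d)) - d_inv_sqrt @ affinity @ d_inv_sqrt
    _, eigvecs = np.linalg.eigh(laplacian)
    fiedler = eigvecs[:, 1]               # second-smallest eigenvector
    return fiedler >= np.median(fiedler)  # boolean split over patches


def discover_masks(patch_feats: np.ndarray, n_iters: int = 3) -> list:
    """Iteratively peel off one candidate object mask per round."""
    remaining = np.arange(len(patch_feats))
    masks = []
    for _ in range(n_iters):
        if len(remaining) < 4:
            break
        feats = patch_feats[remaining]
        affinity = np.clip(feats @ feats.T, 0.0, None)  # cosine-like affinity
        split = normalized_cut_bipartition(affinity)
        masks.append(remaining[split])
        remaining = remaining[~split]
    return masks


def ground_masks(patch_feats, masks, text_embs, class_names):
    """Late interaction: average patch features inside each discovered mask,
    then assign the best-matching text embedding."""
    results = []
    for mask in masks:
        obj = patch_feats[mask].mean(axis=0)
        obj /= np.linalg.norm(obj) + 1e-8
        scores = text_embs @ obj
        results.append(class_names[int(scores.argmax())])
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch_feats = rng.normal(size=(196, 64))   # stand-in for ViT patch features
    patch_feats /= np.linalg.norm(patch_feats, axis=1, keepdims=True)
    text_embs = rng.normal(size=(3, 64))       # stand-in for text-encoder outputs
    text_embs /= np.linalg.norm(text_embs, axis=1, keepdims=True)
    masks = discover_masks(patch_feats)
    print(ground_masks(patch_feats, masks, text_embs, ["cat", "dog", "person"]))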