In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation

Bibliographic Details
Published in: arXiv.org (Aug 9, 2024)
Main author: Kang, Dahyun
Other authors: Cho, Minsu
Published by: Cornell University Library, arXiv.org
Online access: Citation/Abstract
Full text available outside of ProQuest
Description
Abstract: We present lazy visual grounding, a two-stage approach of unsupervised object mask discovery followed by object grounding, for open-vocabulary semantic segmentation. Much previous work casts this task as pixel-to-text classification without object-level comprehension, leveraging the image-to-text classification capability of pretrained vision-and-language models. We argue that visual objects are distinguishable without prior text information, as segmentation is essentially a vision task. Lazy visual grounding first discovers object masks covering an image with iterative Normalized Cuts and then assigns text to the discovered objects in a late-interaction manner. Our model requires no additional training yet shows strong performance on five public datasets: Pascal VOC, Pascal Context, COCO-Object, COCO-Stuff, and ADE20K. In particular, the visually appealing segmentation results demonstrate the model's capability to localize objects precisely. Paper homepage: https://cvlab.postech.ac.kr/research/lazygrounding
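The two-stage pipeline the abstract describes can be illustrated with a compact sketch. The code below is an assumption-laden illustration, not the authors' released implementation (the paper homepage above links to that): stage one runs iterative Normalized Cuts on a patch-affinity graph to peel off object masks one at a time, and stage two assigns each mask a label by matching its pooled feature against text embeddings in a late-interaction fashion. The function names (normalized_cut, discover_masks, ground_masks), the cosine-affinity graph, the mean-value eigenvector threshold, the smaller-partition-as-object heuristic, and the shared patch/text embedding space are all hypothetical choices made for the sketch.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut(feats):
    """One Ncut bipartition over patch features; returns a boolean split."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    W = np.clip(f @ f.T, 0.0, None)      # nonnegative cosine affinities (assumed)
    D = np.diag(W.sum(axis=1))
    # Second-smallest generalized eigenvector of (D - W) x = lambda * D x
    _, vecs = eigh(D - W, D, subset_by_index=[1, 1])
    v = vecs[:, 0]
    return v > v.mean()                  # mean threshold (assumed heuristic)

def discover_masks(patch_feats, n_objects=3):
    """Stage 1: iteratively cut, keep one side as an object, drop its patches."""
    alive = np.arange(len(patch_feats))
    masks = []
    for _ in range(n_objects):
        if len(alive) < 4:               # too few patches left to cut
            break
        side = normalized_cut(patch_feats[alive])
        # Assumed heuristic: the smaller partition is the object
        fg = side if side.sum() <= (~side).sum() else ~side
        mask = np.zeros(len(patch_feats), dtype=bool)
        mask[alive[fg]] = True
        masks.append(mask)
        alive = alive[~fg]               # remove covered patches, cut again
    return masks

def ground_masks(patch_feats, masks, text_embs, class_names):
    """Stage 2 (late interaction): pool each mask's features, match to text.
    Assumes patch and text features live in one shared embedding space."""
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    labels = []
    for m in masks:
        pooled = patch_feats[m].mean(axis=0)
        pooled /= np.linalg.norm(pooled)
        labels.append(class_names[int(np.argmax(t @ pooled))])
    return labels

# Hypothetical usage with stand-in features (e.g. a 14x14 ViT patch grid):
patch_feats = np.random.randn(196, 512)
text_embs = np.random.randn(3, 512)
masks = discover_masks(patch_feats)
print(ground_masks(patch_feats, masks, text_embs, ["cat", "dog", "background"]))
```

The abstract does not specify the feature extractors; self-supervised ViT patch features paired with CLIP-style text embeddings are a natural fit for this setup, and either can be swapped in wherever patch_feats and text_embs come from.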
ISSN: 2331-8422
Source: Engineering Database