In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation

Bibliographic details
Published: arXiv.org (Aug 9, 2024)
Primary author: Kang, Dahyun
Other authors: Cho, Minsu
Publisher: Cornell University Library, arXiv.org
Details
Abstract: We present lazy visual grounding, a two-stage approach of unsupervised object mask discovery followed by object grounding, for open-vocabulary semantic segmentation. Much of the prior art casts this task as pixel-to-text classification without object-level comprehension, leveraging the image-to-text classification capability of pretrained vision-and-language models. We argue that visual objects are distinguishable without prior text information, as segmentation is essentially a vision task. Lazy visual grounding first discovers object masks covering an image with iterative Normalized cuts and then assigns text to the discovered objects in a late-interaction manner. Our model requires no additional training yet shows strong performance on five public datasets: Pascal VOC, Pascal Context, COCO-object, COCO-stuff, and ADE 20K. In particular, the visually appealing segmentation results demonstrate the model's capability to localize objects precisely. Paper homepage: https://cvlab.postech.ac.kr/research/lazygrounding
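
The abstract describes a two-stage pipeline: text-free object mask discovery (iterative Normalized cuts), followed by late-interaction assignment of class texts to the discovered masks using a pretrained vision-and-language model. The sketch below only illustrates that general shape under assumed interfaces; `discover_masks`, `image_encoder`, and `text_encoder` are hypothetical placeholders, not the authors' implementation or API.

```python
import numpy as np

def lazy_visual_grounding(image, class_names, discover_masks, image_encoder, text_encoder):
    """Illustrative two-stage sketch: discover object masks, then ground text on them."""
    # Stage 1: text-free object discovery (stand-in for iterative Normalized cuts).
    masks = discover_masks(image)                      # assumed: list of boolean HxW arrays

    # Stage 2: late interaction -- score each discovered object against the class texts.
    text_feats = text_encoder(class_names)             # assumed: (C, D) L2-normalized embeddings
    segmentation = np.full(image.shape[:2], -1, dtype=int)
    for mask in masks:
        obj_feat = image_encoder(image, mask)          # assumed: (D,) embedding of the masked region
        obj_feat = obj_feat / np.linalg.norm(obj_feat)
        scores = text_feats @ obj_feat                 # cosine similarity per class
        segmentation[mask] = int(np.argmax(scores))    # label the whole object mask at once
    return segmentation
```

Because text enters only in the second stage, the mask boundaries come entirely from the vision-side discovery step, which is the "lazy" grounding idea the abstract emphasizes.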
ISSN: 2331-8422
Source: Engineering Database