In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation
| Publication year: | arXiv.org (Aug 9, 2024), p. n/a |
|---|---|
| First author: | Kang, Dahyun |
| Other authors: | Cho, Minsu |
| Publication details: | Cornell University Library, arXiv.org |
| Subjects: | Visual tasks; Vision; Semantic segmentation; Classification; Image segmentation; Pascal (programming language) |
| Online access: | Citation/Abstract; Full text outside of ProQuest |
MARC
| LEADER | 00000nab a2200000uu 4500 | ||
|---|---|---|---|
| 001 | 3092074602 | ||
| 003 | UK-CbPIL | ||
| 022 | |a 2331-8422 | ||
| 035 | |a 3092074602 | ||
| 045 | 0 | |b d20240809 | |
| 100 | 1 | |a Kang, Dahyun | |
| 245 | 1 | |a In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation | |
| 260 | |b Cornell University Library, arXiv.org |c Aug 9, 2024 | ||
| 513 | |a Working Paper | ||
| 520 | 3 | |a We present lazy visual grounding, a two-stage approach to open-vocabulary semantic segmentation: unsupervised object mask discovery followed by object grounding. Much of the prior art casts this task as pixel-to-text classification without object-level comprehension, leveraging the image-to-text classification capability of pretrained vision-and-language models. We argue that visual objects are distinguishable without prior text information, as segmentation is fundamentally a vision task. Lazy visual grounding first discovers object masks covering an image with iterative Normalized cuts and then assigns text to the discovered objects in a late-interaction manner. Our model requires no additional training yet performs strongly on five public datasets: Pascal VOC, Pascal Context, COCO-object, COCO-stuff, and ADE 20K. In particular, the visually appealing segmentation results demonstrate the model's capability to localize objects precisely. Paper homepage: https://cvlab.postech.ac.kr/research/lazygrounding | |
| 653 | |a Visual tasks | ||
| 653 | |a Vision | ||
| 653 | |a Semantic segmentation | ||
| 653 | |a Classification | ||
| 653 | |a Image segmentation | ||
| 653 | |a Pascal (programming language) | ||
| 700 | 1 | |a Cho, Minsu | |
| 773 | 0 | |t arXiv.org |g (Aug 9, 2024), p. n/a | |
| 786 | 0 | |d ProQuest |t Engineering Database | |
| 856 | 4 | 1 | |3 Citation/Abstract |u https://www.proquest.com/docview/3092074602/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch |
| 856 | 4 | 0 | |3 Full text outside of ProQuest |u http://arxiv.org/abs/2408.04961 |
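The abstract describes discovering object masks with iterative Normalized cuts before any text is involved. As a rough illustration of that first stage, the sketch below implements a single Ncut bipartition step over an affinity matrix; this is a hypothetical minimal implementation for intuition, not the authors' code, and the function name `normalized_cut_bipartition` is our own.

```python
import numpy as np

def normalized_cut_bipartition(W):
    """One Normalized-cut step: bipartition the nodes of affinity matrix W.

    Solves the generalized eigenproblem (D - W) y = lambda * D y via the
    symmetric form D^{-1/2} (D - W) D^{-1/2} z = lambda z, then thresholds
    the eigenvector of the second-smallest eigenvalue at its median.
    Returns a boolean mask assigning each node to one of two groups.
    """
    d = W.sum(axis=1)                        # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.diag(d) - W                       # unnormalized graph Laplacian
    L_sym = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L_sym) # eigenvalues in ascending order
    z = eigvecs[:, 1]                        # second-smallest eigenvector
    y = d_inv_sqrt * z                       # map back to the generalized problem
    return y >= np.median(y)                 # median split -> two groups
```

In an iterative scheme like the one the abstract sketches, this step would be applied recursively to each group (with `W` built from pixel- or patch-level visual features) until the desired number of object masks is reached; text labels are only assigned afterwards.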