Altogether: Image Captioning via Re-aligning Alt-text
| Published in: | arXiv.org (Dec 12, 2024) |
|---|---|
| Main author: | |
| Other authors: | |
| Publisher: | Cornell University Library, arXiv.org |
| Subjects: | |
| Online access: | Citation/Abstract; full text outside of ProQuest |
| Abstract: | This paper focuses on creating synthetic data to improve the quality of image captions. Existing works typically have two shortcomings: first, they caption images from scratch, ignoring existing alt-text metadata, and second, they lack transparency when the captioner's training data (e.g., GPT) is unknown. In this paper, we study a principled approach, Altogether, based on the key idea of editing and re-aligning the existing alt-texts associated with the images. To generate training data, we perform human annotation in which annotators start from the existing alt-text and re-align it to the image content over multiple rounds, constructing captions with rich visual concepts. This differs from prior work that treats human annotation as a one-time description task based solely on images and annotator knowledge. We train a captioner on this data that generalizes the process of re-aligning alt-texts at scale. Our results show that Altogether leads to richer image captions that also improve text-to-image generation and zero-shot image classification tasks. |
| ISSN: | 2331-8422 |
| Source: | Engineering Database |