Altogether: Image Captioning via Re-aligning Alt-text

Bibliographic Details
Published in: arXiv.org (Dec 12, 2024), p. n/a
Main author: Hu, Xu
Other authors: Huang, Po-Yao; Tan, Xiaoqing Ellen; Yeh, Ching-Feng; Kahn, Jacob; Jou, Christine; Ghosh, Gargi; Levy, Omer; Zettlemoyer, Luke; Yih, Wen-tau; Li, Shang-Wen; Xie, Saining; Feichtenhofer, Christoph
Published: Cornell University Library, arXiv.org
Online access: Citation/Abstract
Full text outside of ProQuest
Description
Abstract: This paper focuses on creating synthetic data to improve the quality of image captions. Existing works typically have two shortcomings: first, they caption images from scratch, ignoring existing alt-text metadata; and second, they lack transparency when the captioners' training data (e.g., GPT) is unknown. In this paper, we study Altogether, a principled approach based on the key idea of editing and re-aligning the existing alt-texts associated with images. To generate training data, we perform human annotation in which annotators start from the existing alt-text and re-align it to the image content over multiple rounds, consequently constructing captions with rich visual concepts. This differs from prior work that treats human annotation as a one-time description task based solely on the images and annotator knowledge. We then train a captioner on this data to generalize the process of re-aligning alt-texts at scale. Our results show that Altogether yields richer image captions that also improve text-to-image generation and zero-shot image classification tasks.
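Illustrative note (not part of the catalog record): the abstract describes a captioner that edits and re-aligns an existing alt-text to the image over multiple rounds rather than captioning from scratch. The following is a minimal, hypothetical Python sketch of that iterative loop under stated assumptions; the realign callable, the realign_alt_text helper, and the round count are illustrative stand-ins, not the authors' released model or API.

```python
# Hypothetical sketch of multi-round alt-text re-alignment as described in the
# abstract. `realign` is a stand-in for the trained captioner (or a human
# annotator pass); it is NOT the authors' actual interface.

from typing import Callable


def realign_alt_text(
    image_path: str,
    alt_text: str,
    realign: Callable[[str, str], str],
    rounds: int = 3,
) -> str:
    """Iteratively edit an existing alt-text so it better matches the image.

    Each round feeds the current caption back to the captioner together with
    the image, mirroring the paper's idea of editing/re-aligning existing
    alt-text instead of generating a caption from scratch.
    """
    caption = alt_text
    for _ in range(rounds):
        revised = realign(image_path, caption)
        if revised == caption:  # no further edits proposed; treat as converged
            break
        caption = revised
    return caption


if __name__ == "__main__":
    # Placeholder captioner for illustration only; a real one would condition
    # on the image pixels as well as the current caption.
    def dummy_realign(image_path: str, caption: str) -> str:
        return caption  # no-op stand-in

    print(realign_alt_text("photo.jpg", "a dog in a park", dummy_realign))
```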
ISSN:2331-8422
Source: Engineering Database