Enhancing Temporal Understanding in Audio Question Answering for Large Audio Language Models

Bibliographic Details
Container / Database: arXiv.org (Dec 13, 2024), p. n/a
Main Author: Sridhar, Arvind Krishna
Other Authors: Guo, Yinyi; Visser, Erik
Published in:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract; Full text outside of ProQuest
Description
Abstract: The Audio Question Answering (AQA) task includes audio event classification, audio captioning, and open-ended reasoning. Recently, AQA has garnered attention due to the advent of Large Audio Language Models (LALMs). Current literature focuses on constructing LALMs by integrating audio encoders with text-only Large Language Models (LLMs) through a projection module. While LALMs excel in general audio understanding, they are limited in temporal reasoning, which may hinder their commercial applications and on-device deployment. This paper addresses these challenges and limitations in audio temporal reasoning. First, we introduce a data augmentation technique for generating reliable audio temporal questions and answers using an LLM. Second, we further fine-tune an existing baseline with a curriculum learning strategy to specialize it in temporal reasoning without compromising performance on its fine-tuned tasks. We demonstrate the performance of our model against state-of-the-art LALMs on public audio benchmark datasets. Third, we deploy our AQA model locally on-device and investigate its CPU inference for edge applications.
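The abstract does not spell out the augmentation recipe, so the following is a minimal sketch, assuming timestamped sound-event annotations are available per clip, of how temporal question-answer pairs could be generated with an LLM. AudioEvent, build_temporal_qa_prompt, and call_llm are illustrative names introduced here; call_llm is a hypothetical placeholder for whatever chat-completion client is used, not the authors' released code.

    # Sketch: LLM-driven augmentation of temporal audio QA pairs
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AudioEvent:
        label: str       # e.g. "dog bark"
        start_s: float   # event onset in seconds
        end_s: float     # event offset in seconds

    def build_temporal_qa_prompt(events: List[AudioEvent]) -> str:
        """Format timestamped events into an instruction for the LLM."""
        lines = [f"- {e.label}: {e.start_s:.1f}s to {e.end_s:.1f}s" for e in events]
        return (
            "You are given the timestamped sound events of one audio clip:\n"
            + "\n".join(lines)
            + "\n\nGenerate 3 question-answer pairs that test temporal reasoning "
            "(ordering, overlap, duration). Answer only from the timestamps above; "
            "do not invent events. Return one 'Q: ... A: ...' pair per line."
        )

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a chat-completion API call."""
        raise NotImplementedError("plug in an LLM client here")

    if __name__ == "__main__":
        clip = [
            AudioEvent("dog bark", 0.5, 2.0),
            AudioEvent("car horn", 1.5, 3.0),
            AudioEvent("speech", 4.0, 9.5),
        ]
        prompt = build_temporal_qa_prompt(clip)
        print(prompt)                 # inspect the generated instruction
        # qa_text = call_llm(prompt)  # produce the augmented temporal QA pairs

Constraining the LLM to answer only from the supplied timestamps is one plausible way to keep the generated temporal questions and answers reliable, which is the property the abstract emphasizes.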
ISSN: 2331-8422
Source: Engineering Database