SyncFlow: Toward Temporally Aligned Joint Audio-Video Generation from Text

Bibliographic Details
Published in: arXiv.org (Dec 3, 2024)
Main Author: Liu, Haohe
Other Authors: Le Lan, Gael; Mei, Xinhao; Ni, Zhaoheng; Kumar, Anurag; Nagaraja, Varun; Wang, Wenwu; Plumbley, Mark D.; Shi, Yangyang; Chandra, Vikas
Published: Cornell University Library, arXiv.org
Online Access: Citation/Abstract; full text available outside of ProQuest
Description
Abstract: Video and audio are closely correlated modalities that humans naturally perceive together. While recent advancements have enabled the generation of audio or video from text, producing both modalities simultaneously still typically relies on either a cascaded process or multi-modal contrastive encoders. These approaches, however, often lead to suboptimal results due to inherent information losses during inference and conditioning. In this paper, we introduce SyncFlow, a system that is capable of simultaneously generating temporally synchronized audio and video from text. The core of SyncFlow is the proposed dual-diffusion-transformer (d-DiT) architecture, which enables joint video and audio modelling with proper information fusion. To efficiently manage the computational cost of joint audio and video modelling, SyncFlow utilizes a multi-stage training strategy that separates video and audio learning before joint fine-tuning. Our empirical evaluations demonstrate that SyncFlow produces audio and video outputs that are more closely correlated than those of baseline methods, with significantly enhanced audio quality and audio-visual correspondence. Moreover, we demonstrate strong zero-shot capabilities of SyncFlow, including zero-shot video-to-audio generation and adaptation to novel video resolutions without further training.
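
Note: The abstract describes a dual-diffusion-transformer in which video and audio are modelled jointly with cross-modal information fusion. The paper itself does not provide code in this record; the following is only a minimal, illustrative sketch of that general idea, assuming two modality-specific transformer towers that exchange information through cross-modal attention while jointly processing video and audio latent tokens. All module names, dimensions, and the placement of the fusion step are assumptions for illustration, not the authors' implementation.

# Minimal sketch (assumed, not the paper's code) of a dual-tower transformer
# with cross-modal fusion over video and audio latent token sequences.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    """One joint block: per-modality self-attention, then cross-modal attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.video_self = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.audio_self = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.v_from_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.a_from_v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, v: torch.Tensor, a: torch.Tensor):
        v = self.video_self(v)
        a = self.audio_self(a)
        # Information fusion: each stream attends to the other modality's tokens.
        v = v + self.v_from_a(v, a, a, need_weights=False)[0]
        a = a + self.a_from_v(a, v, v, need_weights=False)[0]
        return v, a


class DualTowerModel(nn.Module):
    """Toy joint model over video and audio latent tokens, conditioned on text."""

    def __init__(self, dim: int = 256, depth: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList(CrossModalBlock(dim) for _ in range(depth))
        self.video_out = nn.Linear(dim, dim)
        self.audio_out = nn.Linear(dim, dim)

    def forward(self, video_tokens, audio_tokens, text_cond):
        # For simplicity, text-conditioning tokens are prepended to both streams.
        v = torch.cat([text_cond, video_tokens], dim=1)
        a = torch.cat([text_cond, audio_tokens], dim=1)
        for block in self.blocks:
            v, a = block(v, a)
        n = text_cond.shape[1]
        return self.video_out(v[:, n:]), self.audio_out(a[:, n:])


if __name__ == "__main__":
    model = DualTowerModel()
    video = torch.randn(1, 64, 256)  # hypothetical flattened video latent tokens
    audio = torch.randn(1, 32, 256)  # hypothetical audio latent tokens
    text = torch.randn(1, 8, 256)    # hypothetical text-encoder embeddings
    v_pred, a_pred = model(video, audio, text)
    print(v_pred.shape, a_pred.shape)  # (1, 64, 256) and (1, 32, 256)

The sketch only shows where cross-modal fusion could sit relative to per-modality processing; the paper's actual d-DiT design, diffusion objective, and multi-stage training schedule are described in the full text.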
ISSN: 2331-8422
Source: Engineering Database