SyncFlow: Toward Temporally Aligned Joint Audio-Video Generation from Text

Bibliographic Details
Published in: arXiv.org (Dec 3, 2024), p. n/a
Main author: Liu, Haohe
Other authors: Le Lan, Gael; Mei, Xinhao; Ni, Zhaoheng; Kumar, Anurag; Nagaraja, Varun; Wang, Wenwu; Plumbley, Mark D.; Shi, Yangyang; Chandra, Vikas
Published by: Cornell University Library, arXiv.org
Online access: Citation/Abstract; full text outside of ProQuest
Description
Abstract: Video and audio are closely correlated modalities that humans naturally perceive together. While recent advances have enabled the generation of audio or video from text, producing both modalities simultaneously still typically relies on either a cascaded process or multi-modal contrastive encoders. These approaches, however, often lead to suboptimal results due to inherent information losses during inference and conditioning. In this paper, we introduce SyncFlow, a system capable of simultaneously generating temporally synchronized audio and video from text. The core of SyncFlow is the proposed dual-diffusion-transformer (d-DiT) architecture, which enables joint video and audio modelling with proper information fusion. To efficiently manage the computational cost of joint audio and video modelling, SyncFlow utilizes a multi-stage training strategy that separates video and audio learning before joint fine-tuning. Our empirical evaluations demonstrate that SyncFlow produces audio and video outputs that are more correlated than those of baseline methods, with significantly enhanced audio quality and audio-visual correspondence. Moreover, we demonstrate strong zero-shot capabilities of SyncFlow, including zero-shot video-to-audio generation and adaptation to novel video resolutions without further training.
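
Note: the abstract describes a dual-stream diffusion transformer in which video and audio tokens are modelled jointly with cross-modal information fusion. The PyTorch sketch below illustrates one plausible form such a dual-stream block could take; the module names, dimensions, and the specific fusion mechanism (bidirectional cross-attention between the two token streams) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a dual-stream transformer block with cross-modal
# fusion. All names, dimensions, and the fusion scheme are assumptions made
# for illustration; they are not the SyncFlow d-DiT implementation.
import torch
import torch.nn as nn


class DualStreamBlock(nn.Module):
    """One block of a dual-stream transformer: each modality runs its own
    self-attention and feed-forward path, then exchanges information via
    bidirectional cross-attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.video_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.video_from_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_from_video = nn.MultiheadAttention(dim, heads, batch_first=True)
        # One LayerNorm per stream is reused across sub-layers for brevity.
        self.norm_v = nn.LayerNorm(dim)
        self.norm_a = nn.LayerNorm(dim)
        self.mlp_v = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.mlp_a = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, v: torch.Tensor, a: torch.Tensor):
        # Per-modality self-attention (residual).
        v = v + self.video_self(self.norm_v(v), self.norm_v(v), self.norm_v(v))[0]
        a = a + self.audio_self(self.norm_a(a), self.norm_a(a), self.norm_a(a))[0]
        # Cross-modal fusion: each stream attends to the other.
        v = v + self.video_from_audio(self.norm_v(v), self.norm_a(a), self.norm_a(a))[0]
        a = a + self.audio_from_video(self.norm_a(a), self.norm_v(v), self.norm_v(v))[0]
        # Per-modality feed-forward (residual).
        v = v + self.mlp_v(self.norm_v(v))
        a = a + self.mlp_a(self.norm_a(a))
        return v, a


if __name__ == "__main__":
    block = DualStreamBlock()
    video_tokens = torch.randn(2, 64, 256)   # (batch, video tokens, dim)
    audio_tokens = torch.randn(2, 128, 256)  # (batch, audio tokens, dim)
    v_out, a_out = block(video_tokens, audio_tokens)
    print(v_out.shape, a_out.shape)  # torch.Size([2, 64, 256]) torch.Size([2, 128, 256])
```

In this reading, temporal alignment would be encouraged by letting each token stream attend to the other at every block, while keeping separate per-modality parameters; whether SyncFlow fuses the streams this way is not specified in the record above.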
ISSN: 2331-8422
Source: Engineering Database