SyncFlow: Toward Temporally Aligned Joint Audio-Video Generation from Text

Bibliographic details
Published in: arXiv.org (Dec 3, 2024)
Main author: Liu, Haohe
Contributors: Le Lan, Gael; Mei, Xinhao; Ni, Zhaoheng; Kumar, Anurag; Nagaraja, Varun; Wang, Wenwu; Plumbley, Mark D.; Shi, Yangyang; Chandra, Vikas
Published / Created: Cornell University Library, arXiv.org
Online access: Citation/Abstract; full text available outside of ProQuest
Description
Abstract: Video and audio are closely correlated modalities that humans naturally perceive together. While recent advancements have enabled the generation of audio or video from text, producing both modalities simultaneously still typically relies on either a cascaded process or multi-modal contrastive encoders. These approaches, however, often lead to suboptimal results due to inherent information losses during inference and conditioning. In this paper, we introduce SyncFlow, a system capable of simultaneously generating temporally synchronized audio and video from text. The core of SyncFlow is the proposed dual-diffusion-transformer (d-DiT) architecture, which enables joint video and audio modelling with proper information fusion. To efficiently manage the computational cost of joint audio and video modelling, SyncFlow utilizes a multi-stage training strategy that separates video and audio learning before joint fine-tuning. Our empirical evaluations demonstrate that SyncFlow produces audio and video outputs that are more correlated than those of baseline methods, with significantly enhanced audio quality and audio-visual correspondence. Moreover, we demonstrate strong zero-shot capabilities of SyncFlow, including zero-shot video-to-audio generation and adaptation to novel video resolutions without further training.
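The abstract's central technical idea, joint text-conditioned denoising of video and audio latents with cross-modal information fusion, can be sketched roughly as follows. This is a minimal illustration based only on the abstract: the block layout, the bidirectional cross-attention used for fusion, the dimensions, and all names (e.g. `DualDiTBlock`) are assumptions for exposition, not the paper's actual d-DiT implementation.

```python
# Minimal sketch of one joint "dual-DiT"-style block, assuming:
#  - two token streams (video latents, audio latents) plus text conditioning tokens,
#  - per-modality self-attention, bidirectional cross-attention for fusion, then MLPs.
# All module names and design choices here are illustrative assumptions.
import torch
import torch.nn as nn


class DualDiTBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.video_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-attention in both directions fuses information between the streams.
        self.v_from_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.a_from_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.video_mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.audio_mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # One shared LayerNorm per modality, reused across sub-layers (a simplification).
        self.norm_v = nn.LayerNorm(dim)
        self.norm_a = nn.LayerNorm(dim)

    def forward(self, v: torch.Tensor, a: torch.Tensor, text: torch.Tensor):
        # Per-modality self-attention; text tokens are appended to the key/value
        # sequence so each modality can attend to the prompt.
        vn, an = self.norm_v(v), self.norm_a(a)
        ctx_v = torch.cat([vn, text], dim=1)
        ctx_a = torch.cat([an, text], dim=1)
        v = v + self.video_self(vn, ctx_v, ctx_v)[0]
        a = a + self.audio_self(an, ctx_a, ctx_a)[0]
        # Cross-modal fusion: each stream attends to the other's tokens.
        vn, an = self.norm_v(v), self.norm_a(a)
        v = v + self.v_from_a(vn, an, an)[0]
        a = a + self.a_from_v(an, vn, vn)[0]
        # Position-wise MLPs complete the block.
        v = v + self.video_mlp(self.norm_v(v))
        a = a + self.audio_mlp(self.norm_a(a))
        return v, a


if __name__ == "__main__":
    block = DualDiTBlock()
    video_tokens = torch.randn(2, 64, 512)  # (batch, video latent tokens, dim)
    audio_tokens = torch.randn(2, 32, 512)  # (batch, audio latent tokens, dim)
    text_tokens = torch.randn(2, 16, 512)   # (batch, text condition tokens, dim)
    v_out, a_out = block(video_tokens, audio_tokens, text_tokens)
    print(v_out.shape, a_out.shape)
```

Under this reading, the multi-stage training strategy mentioned in the abstract would amount to training the video and audio streams separately (with the cross-modal attention disabled or frozen) before jointly fine-tuning the fused model, though the exact schedule is not specified in the record.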
ISSN: 2331-8422
Source: Engineering Database