Rate-Adaptive Generative Semantic Communication Using Conditional Diffusion Models

Bibliographic Details
Published in: arXiv.org (Dec 23, 2024), p. n/a
Main author: Yang, Pujing
Other authors: Zhang, Guangyi; Cai, Yunlong
Published by: Cornell University Library, arXiv.org
Topics:
Online access: Citation/Abstract
Full text outside of ProQuest
Description
Abstract: Recent advances in deep learning-based joint source-channel coding (DJSCC) have shown promise for end-to-end semantic image transmission. However, most existing schemes primarily focus on optimizing pixel-wise metrics, which often fail to align with human perception, leading to lower perceptual quality. In this letter, we propose a novel generative DJSCC approach using conditional diffusion models to enhance the perceptual quality of transmitted images. Specifically, by utilizing entropy models, we effectively manage transmission bandwidth based on the estimated entropy of transmitted symbols. These symbols are then used at the receiver as conditional information to guide a conditional diffusion decoder in image reconstruction. Our model is built upon the emerging advanced mamba-like linear attention (MLLA) skeleton, which excels in image processing tasks while also offering fast inference speed. In addition, we introduce a multi-stage training strategy to ensure the stability and improve the overall performance of the model. Simulation results demonstrate that our proposed method significantly outperforms existing approaches in terms of perceptual quality.
ISSN:2331-8422
Source: Engineering Database
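
The abstract describes two mechanisms: bandwidth management driven by an entropy model over the transmitted symbols, and a conditional diffusion decoder that uses the received symbols as guidance. The following is a minimal, self-contained PyTorch sketch of how those two pieces could fit together, written under assumptions of my own: a factorized Gaussian entropy model, a top-k selection rule that keeps the highest-entropy symbols, and a toy conditional denoiser standing in for the full diffusion decoder. All class names, shapes, and the selection rule are illustrative; this is not the authors' implementation or training setup.

# Hypothetical sketch (not the paper's code): entropy-based rate adaptation
# followed by one conditional denoising step using received symbols as guidance.
import math
import torch
import torch.nn as nn

class EntropyRateSelector(nn.Module):
    """Estimates per-symbol entropy with a factorized Gaussian model (assumed)
    and keeps only the top-k highest-entropy symbols, emulating rate adaptation."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))  # learned per-symbol scale

    def forward(self, z, keep_ratio=0.5):
        # Differential entropy of a Gaussian: 0.5 * log(2 * pi * e * sigma^2)
        ent = 0.5 * torch.log(2 * math.pi * math.e * torch.exp(self.log_scale) ** 2)
        k = max(1, int(keep_ratio * z.shape[-1]))
        idx = torch.topk(ent, k).indices            # positions with highest entropy
        mask = torch.zeros_like(z)
        mask[..., idx] = 1.0
        return z * mask, mask                       # prune low-entropy symbols

class ConditionalDenoiser(nn.Module):
    """Toy conditional decoder: predicts noise from a noisy image given the
    received symbols as conditioning (stand-in for the diffusion decoder)."""
    def __init__(self, img_dim, cond_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, img_dim),
        )

    def forward(self, x_noisy, cond, t):
        t_emb = t.float().unsqueeze(-1) / 1000.0    # crude timestep embedding
        return self.net(torch.cat([x_noisy, cond, t_emb], dim=-1))

if __name__ == "__main__":
    B, img_dim, cond_dim = 4, 3 * 32 * 32, 128
    z = torch.randn(B, cond_dim)                    # encoder output (symbols), assumed shape
    selector = EntropyRateSelector(cond_dim)
    z_tx, mask = selector(z, keep_ratio=0.25)       # transmit 25% of the symbols
    y = z_tx + 0.1 * torch.randn_like(z_tx) * mask  # AWGN channel on kept symbols only
    denoiser = ConditionalDenoiser(img_dim, cond_dim)
    x_noisy = torch.randn(B, img_dim)
    t = torch.randint(0, 1000, (B,))
    eps_hat = denoiser(x_noisy, y, t)               # one conditional denoising step
    print(eps_hat.shape)                            # torch.Size([4, 3072])

In the scheme described by the abstract, the conditional decoder would be a full diffusion model built on the MLLA skeleton and trained with the multi-stage strategy; the sketch above only illustrates how entropy-based symbol pruning and symbol-conditioned denoising can be composed.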