Rate-Adaptive Generative Semantic Communication Using Conditional Diffusion Models
| Published in: | arXiv.org (Dec 23, 2024), p. n/a |
|---|---|
| Author: | |
| Other Authors: | |
| Published: | Cornell University Library, arXiv.org |
| Subjects: | |
| Online Access: | Citation/Abstract; Full text outside of ProQuest |
| Abstract: | Recent advances in deep learning-based joint source-channel coding (DJSCC) have shown promise for end-to-end semantic image transmission. However, most existing schemes primarily focus on optimizing pixel-wise metrics, which often fail to align with human perception, leading to lower perceptual quality. In this letter, we propose a novel generative DJSCC approach using conditional diffusion models to enhance the perceptual quality of transmitted images. Specifically, by utilizing entropy models, we effectively manage transmission bandwidth based on the estimated entropy of transmitted symbols. These symbols are then used at the receiver as conditional information to guide a conditional diffusion decoder in image reconstruction. Our model is built upon the emerging advanced Mamba-like linear attention (MLLA) skeleton, which excels in image processing tasks while also offering fast inference speed. In addition, we introduce a multi-stage training strategy to ensure the stability and improve the overall performance of the model. Simulation results demonstrate that our proposed method significantly outperforms existing approaches in terms of perceptual quality. |
|---|---|
| ISSN: | 2331-8422 |
| Source: | Engineering Database |
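
The abstract describes two main ingredients: an entropy model that controls transmission bandwidth based on the estimated entropy of the transmitted symbols, and a conditional diffusion decoder that uses the received symbols to guide image reconstruction. Below is a minimal, hypothetical PyTorch sketch of the rate-adaptation idea only. All module names, shapes, the Gaussian prior, and the plain convolutional decoder standing in for the conditional diffusion decoder are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of entropy-model-based rate adaptation for DJSCC.
# Everything here is an illustrative assumption, not the authors' code.
import math

import torch
import torch.nn as nn


class ToyEntropyModel(nn.Module):
    """Estimates per-symbol bits from a learned zero-mean Gaussian prior."""

    def __init__(self, channels: int):
        super().__init__()
        # Per-channel log-sigma; random init only so the demo entropies differ.
        self.log_scale = nn.Parameter(torch.randn(channels))

    def bits(self, y: torch.Tensor) -> torch.Tensor:
        sigma = self.log_scale.exp().view(1, -1, 1, 1)
        # Differential entropy of N(0, sigma^2) in bits, broadcast to y's shape.
        return (0.5 * torch.log2(2 * math.pi * math.e * sigma**2)).expand_as(y)


def rate_adaptive_mask(bits: torch.Tensor, budget: float) -> torch.Tensor:
    """Keep the fraction `budget` of symbols with the highest estimated entropy."""
    flat = bits.flatten(1)                                    # (B, N)
    k = max(1, int(budget * flat.shape[1]))
    thresh = flat.topk(k, dim=1).values[:, -1:]               # per-sample cutoff
    return (flat >= thresh).float().view_as(bits)


def awgn(x: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Unit-power normalization followed by additive white Gaussian noise."""
    x = x / (x.pow(2).mean().sqrt() + 1e-8)
    noise_std = 10 ** (-snr_db / 20)
    return x + noise_std * torch.randn_like(x)


# Toy end-to-end pass: encode, prune symbols by estimated entropy, transmit, decode.
channels = 16
encoder = nn.Conv2d(3, channels, kernel_size=4, stride=4)    # stand-in analysis transform
decoder = nn.Sequential(                                      # stand-in for the conditional
    nn.ConvTranspose2d(channels, 64, kernel_size=4, stride=4),  # diffusion decoder
    nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
)
entropy_model = ToyEntropyModel(channels)

img = torch.rand(2, 3, 32, 32)
y = encoder(img)                                              # symbols to transmit
mask = rate_adaptive_mask(entropy_model.bits(y), budget=0.5)  # bandwidth control
y_hat = awgn(y * mask, snr_db=10.0) * mask                    # AWGN channel over kept symbols
recon = decoder(y_hat)                                        # reconstruction conditioned on y_hat
print(recon.shape)                                            # torch.Size([2, 3, 32, 32])
```

In the sketch, the mask plays the role of the bandwidth controller: lowering `budget` transmits fewer symbols, and the decoder must reconstruct the image from whatever conditioning information survives the channel.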