Neural Network-Based Atlas Enhancement in MPEG Immersive Video
| Published in: | Mathematics, vol. 13, no. 19 (2025), pp. 3110-3127 |
|---|---|
| Main author: | |
| Other authors: | |
| Publisher: | MDPI AG |
| Topics: | |
| Online access: | Citation/Abstract; Full Text + Graphics; Full Text - PDF |
| Abstract: | Recently, the demand for immersive videos has surged with the expansion of virtual reality, augmented reality, and metaverse technologies. The Moving Picture Experts Group (MPEG) has developed the MPEG Immersive Video (MIV) international standard to transmit large-volume immersive videos efficiently. The MIV encoder packs extensive multi-view videos into atlas videos, a low-bitrate representation. When these atlas videos are compressed with conventional video codecs, compression artifacts often appear in the reconstructed atlases. To address this issue, this study proposes a feature-extraction-based convolutional neural network (FECNN) that reduces compression artifacts during MIV atlas video transmission. The proposed FECNN takes quantization parameter (QP) maps and depth information as inputs, and consists of shallow feature extraction (SFE) blocks and deep feature extraction (DFE) blocks to exploit layered feature characteristics. Compared with the existing MIV, the proposed method improves the Bjontegaard delta bit-rate (BDBR) by −4.12% and −6.96% in the basic and additional views, respectively. |
|---|---|
| ISSN: | 2227-7390 |
| DOI: | 10.3390/math13193110 |
| Source: | Engineering Database |
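The abstract describes the FECNN taking the decoded atlas together with a QP map and depth information as network inputs before the SFE/DFE stages. The record gives no implementation detail, so the following is only an illustrative sketch of how such inputs might be assembled as channels; the function name, shapes, block size, and normalization constants are assumptions, not taken from the paper.

```python
import numpy as np

def build_fecnn_input(decoded_atlas, qp_per_block, depth_map, block_size=64):
    """Stack decoded luma, an upsampled QP map, and depth as input channels.

    decoded_atlas: (H, W) luma of the reconstructed atlas (8-bit assumed)
    qp_per_block:  (H//block_size, W//block_size) per-block quantization parameters
    depth_map:     (H, W) geometry/depth component (8-bit assumed)
    Returns a (3, H, W) float32 tensor with channels scaled to [0, 1].
    """
    h, w = decoded_atlas.shape
    # Broadcast each block's QP to pixel resolution (nearest-neighbour upsample).
    qp_map = np.repeat(np.repeat(qp_per_block, block_size, axis=0),
                       block_size, axis=1)[:h, :w]
    channels = [
        decoded_atlas.astype(np.float32) / 255.0,  # texture luma
        qp_map.astype(np.float32) / 51.0,          # HEVC/VVC QP range is 0..51
        depth_map.astype(np.float32) / 255.0,      # depth, 8-bit assumption
    ]
    return np.stack(channels, axis=0)

# Example: a 128x128 atlas tile with four 64x64 QP blocks.
x = build_fecnn_input(
    decoded_atlas=np.full((128, 128), 128, dtype=np.uint8),
    qp_per_block=np.array([[32, 37], [42, 27]], dtype=np.uint8),
    depth_map=np.zeros((128, 128), dtype=np.uint8),
)
print(x.shape)  # (3, 128, 128)
```

Feeding the QP map as an explicit channel lets a restoration network condition on local compression strength rather than learning one fixed denoising level; the stacked tensor here would be the input to the SFE blocks in such a design.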