Neural Network-Based Atlas Enhancement in MPEG Immersive Video

Bibliographic Details
Published in: Mathematics, vol. 13, no. 19 (2025), pp. 3110-3127
Main author: Lee, Taesik
Other authors: Yun, Kugjin; Cheong, Won-Sik; Jun, Dongsan
Published: MDPI AG
Description
Abstract: The demand for immersive video has surged with the expansion of virtual reality, augmented reality, and metaverse technologies. The Moving Picture Experts Group (MPEG) has developed the MPEG Immersive Video (MIV) international standard to transmit large-volume immersive video efficiently. The MIV encoder converts extensive multi-view videos into low-bitrate atlas videos. When these atlas videos are compressed with conventional video codecs, compression artifacts often appear in the reconstructed atlases. To address this issue, this study proposes a feature-extraction-based convolutional neural network (FECNN) that reduces compression artifacts during MIV atlas video transmission. The proposed FECNN takes quantization parameter (QP) maps and depth information as inputs and consists of shallow feature extraction (SFE) blocks and deep feature extraction (DFE) blocks that exploit layered feature characteristics. Compared with the existing MIV, the proposed method improves the Bjontegaard delta bit-rate (BDBR) by -4.12% and -6.96% in the basic and additional views, respectively.
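The abstract reports gains in terms of the Bjontegaard delta bit-rate (BDBR), the standard metric for comparing two rate-distortion curves. As context only (this is not the authors' code), the following is a minimal pure-Python sketch of the usual Bjontegaard procedure: fit a cubic polynomial of log10(bitrate) over PSNR for each codec, then compare the average log-rate over the overlapping PSNR interval. Function names here are illustrative assumptions.

```python
import math

def _polyfit3(xs, ys):
    # Least-squares cubic fit over basis [1, x, x^2, x^3]:
    # solve the 4x4 normal equations by Gaussian elimination.
    n = 4
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (aty[r] - sum(ata[r][c] * coef[c]
                                for c in range(r + 1, n))) / ata[r][r]
    return coef  # c0 + c1*x + c2*x^2 + c3*x^3

def _polyint(coef, lo, hi):
    # Definite integral of the cubic over [lo, hi].
    f = lambda x: sum(c * x ** (i + 1) / (i + 1) for i, c in enumerate(coef))
    return f(hi) - f(lo)

def bd_rate(rates_ref, psnrs_ref, rates_test, psnrs_test):
    """Bjontegaard delta bit-rate (%) of a test codec vs. a reference.

    Negative values mean the test codec needs fewer bits for the
    same quality (as in the paper's -4.12% / -6.96% results).
    """
    lr_ref = [math.log10(r) for r in rates_ref]
    lr_test = [math.log10(r) for r in rates_test]
    lo = max(min(psnrs_ref), min(psnrs_test))
    hi = min(max(psnrs_ref), max(psnrs_test))
    # Center PSNR values to keep the cubic fit well-conditioned.
    x0 = 0.5 * (lo + hi)
    c_ref = _polyfit3([p - x0 for p in psnrs_ref], lr_ref)
    c_test = _polyfit3([p - x0 for p in psnrs_test], lr_test)
    avg_diff = (_polyint(c_test, lo - x0, hi - x0)
                - _polyint(c_ref, lo - x0, hi - x0)) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0
```

For example, a codec that reaches the same PSNR at half the bitrate of the reference at every rate point yields a BD-rate of about -50%.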
ISSN: 2227-7390
DOI: 10.3390/math13193110
Source: Engineering Database