Versatile Video Coding-Post Processing Feature Fusion: A Post-Processing Convolutional Neural Network with Progressive Feature Fusion for Efficient Video Enhancement

Bibliographic Details
Published in: Applied Sciences vol. 14, no. 18 (2024), p. 8276
Main author: Das, Tanni
Other authors: Liang, Xilong; Choi, Kiho
Publisher: MDPI AG
Subjects: Innovations; Video compression; Deep learning; Algorithms; Streaming media; Bandwidths; Neural networks; Efficiency
Online access: Citation/Abstract; Full Text + Graphics; Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3110325251
003 UK-CbPIL
022 |a 2076-3417 
024 7 |a 10.3390/app14188276  |2 doi 
035 |a 3110325251 
045 2 |b d20240101  |b d20241231 
084 |a 231338  |2 nlm 
100 1 |a Das, Tanni  |u Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea; <email>tannidascu@gmail.com</email> (T.D.); <email>xilongliang97@gmail.com</email> (X.L.) 
245 1 |a Versatile Video Coding-Post Processing Feature Fusion: A Post-Processing Convolutional Neural Network with Progressive Feature Fusion for Efficient Video Enhancement 
260 |b MDPI AG  |c 2024 
513 |a Journal Article 
520 3 |a Advanced video codecs such as High Efficiency Video Coding/H.265 (HEVC) and Versatile Video Coding/H.266 (VVC) are vital for streaming high-quality online video content, as they compress and transmit data efficiently. However, these codecs can occasionally degrade video quality by introducing undesirable artifacts such as blockiness, blurriness, and ringing, which detract from the viewer’s experience. To ensure a seamless and engaging video experience, it is essential to remove these artifacts, which improves viewer comfort and engagement. In this paper, we propose a deep feature fusion-based convolutional neural network (CNN) post-processing architecture, VVC-PPFF, to further enhance the performance of VVC. The proposed network, VVC-PPFF, harnesses the power of CNNs to enhance decoded frames, significantly improving the coding efficiency of the state-of-the-art VVC video coding standard. By combining deep features from early and later convolution layers, the network learns to extract both low-level and high-level features, resulting in more generalized outputs that adapt to different quantization parameter (QP) values. The proposed VVC-PPFF network achieves outstanding performance, with Bjøntegaard Delta Rate (BD-Rate) improvements of 5.81% and 6.98% for luma components in random access (RA) and low-delay (LD) configurations, respectively, while also boosting peak signal-to-noise ratio (PSNR). 
653 |a Innovations 
653 |a Video compression 
653 |a Deep learning 
653 |a Algorithms 
653 |a Streaming media 
653 |a Bandwidths 
653 |a Neural networks 
653 |a Efficiency 
700 1 |a Liang, Xilong  |u Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea; <email>tannidascu@gmail.com</email> (T.D.); <email>xilongliang97@gmail.com</email> (X.L.) 
700 1 |a Choi, Kiho  |u Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea; <email>tannidascu@gmail.com</email> (T.D.); <email>xilongliang97@gmail.com</email> (X.L.); Department of Electronic Engineering, Kyung Hee University, Yongin 17104, Republic of Korea 
773 0 |t Applied Sciences  |g vol. 14, no. 18 (2024), p. 8276 
786 0 |d ProQuest  |t Publicly Available Content Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3110325251/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3110325251/fulltextwithgraphics/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3110325251/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch
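The BD-Rate figures quoted in the abstract (5.81% and 6.98% luma gains) come from the standard Bjøntegaard metric, which compares two rate-distortion curves by fitting cubic polynomials of log-bitrate against PSNR and averaging the gap over the overlapping quality range. A minimal sketch of that standard computation (the sample rate/PSNR values below are hypothetical, not taken from the paper; NumPy is assumed available):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate difference (%) between two RD curves (Bjøntegaard metric)."""
    # Fit cubic polynomials of log-rate as a function of PSNR for each codec.
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    # Negative result: the test codec needs less bitrate for the same quality.
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0

# Hypothetical example: the test codec reaches the same PSNR at 10% lower bitrate,
# so the BD-Rate is -10%.
anchor_rates, anchor_psnr = [100, 200, 400, 800], [30, 33, 36, 39]
test_rates = [r * 0.9 for r in anchor_rates]
print(bd_rate(anchor_rates, anchor_psnr, test_rates, anchor_psnr))
```

A BD-Rate "improvement" of 5.81%, as reported in the abstract, corresponds to a value of about -5.81% from this function: equal quality at roughly 5.81% less bitrate than the VVC anchor.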