Enhancing Underwater Video from Consecutive Frames While Preserving Temporal Consistency

Bibliographic Details
Published in: Journal of Marine Science and Engineering, vol. 13, no. 1 (2025), p. 127
Main Author: Hu, Kai
Other Authors: Meng, Yuancheng; Liao, Zichen; Tang, Lei; Ye, Xiaoling
Published: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3159529997
003 UK-CbPIL
022 |a 2077-1312 
024 7 |a 10.3390/jmse13010127  |2 doi 
035 |a 3159529997 
045 2 |b d20250101  |b d20251231 
084 |a 231479  |2 nlm 
100 1 |a Hu, Kai  |u School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China; 202212490639@nuist.edu.cn (Y.M.); fj808642@student.reading.ac.uk (Z.L.); xyz.nim@163.com (X.Y.); Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing 210044, China 
245 1 |a Enhancing Underwater Video from Consecutive Frames While Preserving Temporal Consistency 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Current methods for underwater image enhancement primarily focus on single-frame processing. While these approaches achieve impressive results for static images, they often fail to maintain temporal coherence across frames in underwater videos, which leads to temporal artifacts and frame flickering. Furthermore, existing enhancement methods struggle to accurately capture features in underwater scenes. This makes it difficult to handle challenges such as uneven lighting and edge blurring in complex underwater environments. To address these issues, this paper presents a dual-branch underwater video enhancement network. The network synthesizes short-range video sequences by learning and inferring optical flow from individual frames. It effectively enhances temporal consistency across video frames through predicted optical flow information, thereby mitigating temporal instability within frame sequences. In addition, to address the limitations of traditional U-Net models in handling complex multiscale feature fusion, this study proposes a novel underwater feature fusion module. By applying both max pooling and average pooling, this module separately extracts local and global features. It utilizes an attention mechanism to adaptively adjust the weights of different regions in the feature map, thereby effectively enhancing key regions within underwater video frames. Experimental results indicate that when compared with the existing underwater image enhancement baseline method and the consistency enhancement baseline method, the proposed model improves the consistency index by 30% and shows a marginal decrease of only 0.6% in enhancement quality index, demonstrating its superiority in underwater video enhancement tasks. 
653 |a Consistency 
653 |a Deep learning 
653 |a Frames (data processing) 
653 |a Image enhancement 
653 |a Lighting 
653 |a Video recordings 
653 |a Optical flow (image analysis) 
653 |a Task complexity 
653 |a Feature maps 
653 |a Methods 
653 |a Information processing 
653 |a Modules 
653 |a Image quality 
653 |a Research & development--R&D 
653 |a Underwater 
653 |a Environmental 
700 1 |a Meng, Yuancheng  |u School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China; 202212490639@nuist.edu.cn (Y.M.); fj808642@student.reading.ac.uk (Z.L.); xyz.nim@163.com (X.Y.) 
700 1 |a Liao, Zichen  |u School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China; 202212490639@nuist.edu.cn (Y.M.); fj808642@student.reading.ac.uk (Z.L.); xyz.nim@163.com (X.Y.); University of Reading, Whiteknights, P.O. Box 217, Reading, Berkshire RG6 6AH, UK 
700 1 |a Tang, Lei  |u Information and Telecommunication Branch, State Grid Jiangsu Electric Power Company, Nanjing 211125, China; tanglei@js.sgcc.com.cn 
700 1 |a Ye, Xiaoling  |u School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China; 202212490639@nuist.edu.cn (Y.M.); fj808642@student.reading.ac.uk (Z.L.); xyz.nim@163.com (X.Y.); Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing 210044, China 
773 0 |t Journal of Marine Science and Engineering  |g vol. 13, no. 1 (2025), p. 127 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3159529997/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3159529997/fulltextwithgraphics/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3159529997/fulltextPDF/embedded/H09TXR3UUZB2ISDL?source=fedsrch
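The abstract (field 520) describes a feature fusion module that extracts local features via max pooling and global features via average pooling, then uses an attention mechanism to reweight regions of the feature map. The paper's actual architecture is not reproduced here; the sketch below is only a minimal NumPy illustration of that general idea, and the function name, the channel-wise pooling axis, and the simple sum used to combine the two maps (where a learned convolution would normally sit) are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_pool_attention(feat):
    """Illustrative spatial attention from max- and average-pooled features.

    feat: array of shape (C, H, W).
    Max pooling over channels keeps the strongest local responses;
    average pooling captures global context. The combined map is squashed
    to per-pixel weights in (0, 1) that rescale the feature map, so
    salient regions are emphasized and flat regions are attenuated.
    """
    max_map = feat.max(axis=0, keepdims=True)   # (1, H, W): local peaks
    avg_map = feat.mean(axis=0, keepdims=True)  # (1, H, W): global context
    # A learned conv would normally mix the two maps; a plain sum stands in.
    attn = sigmoid(max_map + avg_map)           # per-pixel weights in (0, 1)
    return feat * attn                          # broadcast reweighting

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))   # toy feature map: 8 channels, 4x4
y = dual_pool_attention(x)
print(y.shape)  # (8, 4, 4)
```

Because the attention weights lie strictly between 0 and 1, the output keeps the input's shape and sign pattern while scaling each spatial position by its estimated salience.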