Dual-CycleGANs with Dynamic Guidance for Robust Underwater Image Restoration

Bibliographic Details
Published in: Journal of Marine Science and Engineering vol. 13, no. 2 (2025), p. 231
Main Author: Lin, Yu-Yang
Other Authors: Huang, Wan-Jen; Yeh, Chia-Hung
Published: MDPI AG
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3171120121
003 UK-CbPIL
022 |a 2077-1312 
024 7 |a 10.3390/jmse13020231  |2 doi 
035 |a 3171120121 
045 2 |b d20250101  |b d20251231 
084 |a 231479  |2 nlm 
100 1 |a Lin, Yu-Yang  |u Institute of Communications Engineering, National Sun Yat-Sen University, Kaohsiung 80404, Taiwan 
245 1 |a Dual-CycleGANs with Dynamic Guidance for Robust Underwater Image Restoration 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a The field of underwater image processing has gained significant attention recently, offering great potential for enhanced exploration of underwater environments, including applications such as underwater terrain scanning and autonomous underwater vehicles. However, underwater images frequently face challenges such as light attenuation, color distortion, and noise introduced by artificial light sources. These degradations not only affect image quality but also hinder the effectiveness of related application tasks. To address these issues, this paper presents a novel deep network model for single underwater image restoration. Our model does not rely on paired training images and incorporates two cycle-consistent generative adversarial network (CycleGAN) structures, forming a dual-CycleGAN architecture. This enables the simultaneous conversion of an underwater image to its in-air (atmospheric) counterpart while learning a light field image that guides the underwater image towards its in-air version. Experimental results indicate that the proposed method provides superior (or at least comparable) restoration performance, in terms of both quantitative measures and visual quality, compared to existing state-of-the-art techniques. Our model also significantly reduces computational complexity, achieving faster processing times and lower memory usage while maintaining superior restoration capability, which makes it highly suitable for real-world applications. 
653 |a Autonomous underwater vehicles 
653 |a Datasets 
653 |a Deep learning 
653 |a Photodegradation 
653 |a Light sources 
653 |a Underwater exploration 
653 |a Neural networks 
653 |a Generative adversarial networks 
653 |a Image processing 
653 |a Image restoration 
653 |a Underwater vehicles 
653 |a Methods 
653 |a Light attenuation 
653 |a Image quality 
653 |a Light 
653 |a Environmental 
700 1 |a Huang, Wan-Jen  |u Institute of Communications Engineering, National Sun Yat-Sen University, Kaohsiung 80404, Taiwan; wjhuang@faculty.nsysu.edu.tw 
700 1 |a Yeh, Chia-Hung  |u Department of Electrical Engineering, National Taiwan Normal University, Taipei 10610, Taiwan; Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80404, Taiwan 
773 0 |t Journal of Marine Science and Engineering  |g vol. 13, no. 2 (2025), p. 231 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3171120121/abstract/embedded/160PP4OP4BJVV2EV?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3171120121/fulltextwithgraphics/embedded/160PP4OP4BJVV2EV?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3171120121/fulltextPDF/embedded/160PP4OP4BJVV2EV?source=fedsrch
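
Note: The abstract above describes a dual-CycleGAN trained on unpaired underwater and in-air images. As a rough illustration of the cycle-consistency objective that any such architecture builds on, the PyTorch sketch below wires up two toy generators and an L1 cycle loss. Everything here (the TinyGenerator module, tensor shapes, variable names) is an illustrative assumption for exposition; it is not the authors' published implementation, whose network and loss details are not given in this record.

# Minimal sketch of the cycle-consistency idea behind a dual-CycleGAN.
# Assumptions: toy generators, random stand-in data, standard L1 cycle loss
# (Zhu et al., CycleGAN). Not the paper's actual model.

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy image-to-image generator standing in for a full CycleGAN generator."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], the usual CycleGAN convention
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def cycle_consistency_loss(g_uw2air: nn.Module,
                           g_air2uw: nn.Module,
                           underwater: torch.Tensor,
                           in_air: torch.Tensor) -> torch.Tensor:
    """An image translated to the other domain and back should
    reproduce the original; penalize the round-trip error with L1."""
    l1 = nn.L1Loss()
    uw_cycle = g_air2uw(g_uw2air(underwater))   # underwater -> in-air -> underwater
    air_cycle = g_uw2air(g_air2uw(in_air))      # in-air -> underwater -> in-air
    return l1(uw_cycle, underwater) + l1(air_cycle, in_air)

if __name__ == "__main__":
    # Unpaired mini-batches from each domain (random stand-ins for real data),
    # scaled to [-1, 1] to match the Tanh output range.
    uw_batch = torch.rand(2, 3, 64, 64) * 2 - 1
    air_batch = torch.rand(2, 3, 64, 64) * 2 - 1

    # First CycleGAN: the underwater <-> in-air restoration path.
    g_uw2air, g_air2uw = TinyGenerator(), TinyGenerator()
    loss = cycle_consistency_loss(g_uw2air, g_air2uw, uw_batch, air_batch)
    print(f"cycle-consistency loss: {loss.item():.4f}")

In the dual-CycleGAN setting the abstract describes, a second generator pair of the same shape would presumably operate on the learned light-field guidance image alongside this restoration path; that pairing is left out of the sketch because its exact formulation is only available in the full text.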