AEFusion: Adaptive Enhanced Fusion of Visible and Infrared Images for Night Vision

Saved in:
Bibliographic Details
Journal/Source: Remote Sensing vol. 17, no. 18 (2025), p. 3129-3154
Main Author: Wang, Xiaozhu
Other Authors: Zhang, Chenglong; Hu, Jianming; Qin, Wen; Zhang, Guifeng; Huang, Min
Published by:
MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF
Description
Abstract:
Highlights
What are the main findings?
- We propose a deep learning-based visible–infrared fusion framework with local adaptive enhancement and ResNet152-LDA feature integration.
- Our method achieves superior performance over state-of-the-art methods in both objective metrics and subjective visual quality.
What is the implication of the main findings?
- Provides a robust solution for preserving critical details in night vision image fusion.
- Offers practical support for intelligent driving and low-visibility imaging applications.
Under night vision conditions, visible-spectrum images often fail to capture background details. Conventional visible and infrared fusion methods generally overlay thermal signatures without preserving latent features in low-visibility regions. This paper proposes a novel deep learning-based fusion algorithm to enhance visual perception in night driving scenarios. First, a local adaptive enhancement algorithm corrects underexposed and overexposed regions in visible images, thereby preventing oversaturation during brightness adjustment. Second, ResNet152 extracts hierarchical feature maps from the enhanced visible and infrared inputs; max pooling and average pooling operations preserve critical features and distinct information across these feature maps. Finally, Linear Discriminant Analysis (LDA) reduces dimensionality and decorrelates the features, and the fused image is reconstructed by weighted integration of the source images. Experimental results on benchmark datasets show that our approach outperforms state-of-the-art methods in both objective metrics and subjective visual assessments.
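The final step described in the abstract, weighted integration of the source images guided by feature activity, can be sketched roughly as follows. This is a minimal illustration only: the function name `fuse_weighted`, the use of NumPy, and the simple per-pixel normalization of activity maps are assumptions for demonstration, not the paper's exact formulation (which derives its weights from ResNet152 features after pooling and LDA).

```python
import numpy as np

def fuse_weighted(vis, ir, act_vis, act_ir, eps=1e-8):
    """Fuse two registered source images by per-pixel weighted integration.

    vis, ir          : float arrays in [0, 1] with the same shape (H, W)
    act_vis, act_ir  : non-negative per-pixel activity maps, e.g. pooled
                       deep-feature responses for each source (hypothetical)
    """
    # Normalize the two activity maps into weights that sum to 1 per pixel.
    w_vis = act_vis / (act_vis + act_ir + eps)
    w_ir = 1.0 - w_vis
    # Weighted integration of the source images.
    return w_vis * vis + w_ir * ir

# Toy usage: where the infrared activity is higher, the fused result
# leans toward the infrared intensity.
vis = np.full((4, 4), 0.2)
ir = np.full((4, 4), 0.8)
act_vis = np.ones((4, 4))          # uniform visible activity
act_ir = 3.0 * np.ones((4, 4))    # stronger infrared activity
fused = fuse_weighted(vis, ir, act_vis, act_ir)
```

With these toy activity maps the infrared source receives weight 0.75 at every pixel, so the fused value is 0.25 * 0.2 + 0.75 * 0.8 = 0.65.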
ISSN: 2072-4292
DOI:10.3390/rs17183129
Source: Advanced Technologies & Aerospace Database