AEFusion: Adaptive Enhanced Fusion of Visible and Infrared Images for Night Vision
| Published in: | Remote Sensing vol. 17, no. 18 (2025), p. 3129–3154 |
|---|---|
| Main author: | |
| Other authors: | |
| Publisher: | MDPI AG |
| Subjects: | |
| Abstract: | Main findings: (1) We propose a deep learning-based visible–infrared fusion framework with local adaptive enhancement and ResNet152–LDA feature integration. (2) Our method achieves superior performance over state-of-the-art methods in both objective metrics and subjective visual quality. Implications: (1) The framework provides a robust solution for preserving critical details in night-vision image fusion. (2) It offers practical support for intelligent driving and low-visibility imaging applications. Under night-vision conditions, visible-spectrum images often fail to capture background details, and conventional visible-and-infrared fusion methods generally overlay thermal signatures without preserving latent features in low-visibility regions. This paper proposes a novel deep learning-based fusion algorithm to enhance visual perception in night-driving scenarios. First, a local adaptive enhancement algorithm corrects underexposed and overexposed regions in visible images, preventing oversaturation during brightness adjustment. Second, ResNet152 extracts hierarchical feature maps from the enhanced visible and infrared inputs; max-pooling and average-pooling operations preserve critical features and distinct information across these feature maps. Finally, Linear Discriminant Analysis (LDA) reduces dimensionality and decorrelates the features, and the fused image is reconstructed by weighted integration of the source images. Experimental results on benchmark datasets show that the approach outperforms state-of-the-art methods in both objective metrics and subjective visual assessments. |
|---|---|
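The pooling and weighted-integration steps described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it assumes 2D activity maps standing in for ResNet152 feature maps, uses plain Python lists instead of tensors, and the function names (`avg_pool`, `max_pool`, `fuse`) are hypothetical.

```python
def avg_pool(fmap, k=2):
    """Average-pool a 2D map with window/stride k (dims assumed divisible by k)."""
    h, w = len(fmap), len(fmap[0])
    return [[sum(fmap[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k)) / (k * k)
             for j in range(w // k)]
            for i in range(h // k)]

def max_pool(fmap, k=2):
    """Max-pool a 2D map with window/stride k, keeping the strongest response."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k))
             for j in range(w // k)]
            for i in range(h // k)]

def fuse(vis, ir, act_vis, act_ir):
    """Weighted integration of two source images.

    Per-pixel weights come from normalized activity maps (here assumed to be
    already resized to the image resolution), so the modality with the stronger
    feature response dominates at each location.
    """
    fused = []
    for i in range(len(vis)):
        row = []
        for j in range(len(vis[0])):
            wv, wi = act_vis[i][j], act_ir[i][j]
            s = (wv + wi) or 1.0  # avoid division by zero on dead pixels
            row.append((wv * vis[i][j] + wi * ir[i][j]) / s)
        fused.append(row)
    return fused
```

For example, `fuse([[10.0]], [[20.0]], [[1.0]], [[3.0]])` blends the two pixels as `(1*10 + 3*20) / 4 = 17.5`, i.e. the infrared value dominates where its activity is three times stronger.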
| ISSN: | 2072-4292 |
| DOI: | 10.3390/rs17183129 |
| Source: | Advanced Technologies & Aerospace Database |