AEFusion: Adaptive Enhanced Fusion of Visible and Infrared Images for Night Vision

Bibliographic Details
Published in: Remote Sensing vol. 17, no. 18 (2025), p. 3129-3154
Main Author: Wang, Xiaozhu
Other Authors: Zhang, Chenglong, Hu, Jianming, Qin, Wen, Zhang, Guifeng, Huang, Min
Published:
MDPI AG
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3254636596
003 UK-CbPIL
022 |a 2072-4292 
024 7 |a 10.3390/rs17183129  |2 doi 
035 |a 3254636596 
045 2 |b d20250101  |b d20251231 
084 |a 231556  |2 nlm 
100 1 |a Wang, Xiaozhu  |u Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; wangxz@aircas.ac.cn (X.W.); wenqin@aircas.ac.cn (Q.W.); zhanggf@aircas.ac.cn (G.Z.); huangmin@aircas.ac.cn (M.H.) 
245 1 |a AEFusion: Adaptive Enhanced Fusion of Visible and Infrared Images for Night Vision 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Highlights: What are the main findings? (1) We propose a deep learning-based visible–infrared fusion framework with local adaptive enhancement and ResNet152-LDA feature integration. (2) Our method achieves superior performance over state-of-the-art methods in both objective metrics and subjective visual quality. What is the implication of the main findings? (1) It provides a robust solution for preserving critical details in night vision image fusion. (2) It offers practical support for intelligent driving and low-visibility imaging applications. Abstract: Under night vision conditions, visible-spectrum images often fail to capture background details. Conventional visible and infrared fusion methods generally overlay thermal signatures without preserving latent features in low-visibility regions. This paper proposes a novel deep learning-based fusion algorithm to enhance visual perception in night driving scenarios. First, a local adaptive enhancement algorithm corrects underexposed and overexposed regions in visible images, preventing oversaturation during brightness adjustment. Second, ResNet152 extracts hierarchical feature maps from the enhanced visible and infrared inputs, and max pooling and average pooling operations preserve critical features and distinct information across these feature maps. Finally, Linear Discriminant Analysis (LDA) reduces dimensionality and decorrelates the features, and the fused image is reconstructed by weighted integration of the source images. Experimental results on benchmark datasets show that the approach outperforms state-of-the-art methods in both objective metrics and subjective visual assessments. (An illustrative pipeline sketch follows the MARC record below.) 
653 |a Visual perception 
653 |a Deep learning 
653 |a Wavelet transforms 
653 |a Algorithms 
653 |a Vision systems 
653 |a Vision 
653 |a Infrared imagery 
653 |a Computer vision 
653 |a Discriminant analysis 
653 |a Machine learning 
653 |a Radiation 
653 |a Visual perception driven algorithms 
653 |a Adaptive algorithms 
653 |a Remote sensing 
653 |a Image reconstruction 
653 |a Neural networks 
653 |a Visibility 
653 |a Night vision 
653 |a Feature maps 
653 |a Infrared signatures 
653 |a Methods 
700 1 |a Zhang, Chenglong  |u The School of Data Science, The Chinese University of Hong Kong, Shenzhen 518172, China 
700 1 |a Hu, Jianming  |u The School of Aerospace Engineering, Harbin Institute of Technology, Harbin 150001, China; hujianming@hit.edu.cn 
700 1 |a Qin, Wen  |u Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; wangxz@aircas.ac.cn (X.W.); wenqin@aircas.ac.cn (Q.W.); zhanggf@aircas.ac.cn (G.Z.); huangmin@aircas.ac.cn (M.H.) 
700 1 |a Zhang, Guifeng  |u Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; wangxz@aircas.ac.cn (X.W.); wenqin@aircas.ac.cn (Q.W.); zhanggf@aircas.ac.cn (G.Z.); huangmin@aircas.ac.cn (M.H.) 
700 1 |a Huang, Min  |u Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; wangxz@aircas.ac.cn (X.W.); wenqin@aircas.ac.cn (Q.W.); zhanggf@aircas.ac.cn (G.Z.); huangmin@aircas.ac.cn (M.H.) 
773 0 |t Remote Sensing  |g vol. 17, no. 18 (2025), p. 3129-3154 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3254636596/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3254636596/fulltextwithgraphics/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3254636596/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
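
Illustrative note: the 520 abstract above describes a pipeline of local adaptive enhancement, ResNet152 feature extraction, max/average pooling, LDA-based decorrelation, and weighted reconstruction. The sketch below is a minimal, hypothetical PyTorch illustration of that general kind of deep-feature-weighted fusion, not the authors' AEFusion code: the enhancement step is reduced to a simple gamma stretch, the LDA step is replaced by a per-pixel softmax over pooled feature activity, and all function names (enhance_visible, deep_features, fusion_weight, fuse) are invented for this example.

# Illustrative sketch only: a generic deep-feature-weighted visible/infrared fusion
# pipeline in PyTorch. It is NOT the published AEFusion implementation; the local
# adaptive enhancement and LDA decorrelation steps from the abstract are replaced
# here with simple stand-ins (a gamma stretch and a softmax over feature activity).
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet152, ResNet152_Weights


def enhance_visible(vis: np.ndarray) -> np.ndarray:
    """Stand-in for the paper's local adaptive enhancement: a simple gamma stretch."""
    vis = vis.astype(np.float32) / 255.0
    return np.clip(vis ** 0.6, 0.0, 1.0)  # brighten dark regions without clipping highlights


def deep_features(img: torch.Tensor, backbone) -> torch.Tensor:
    """Early hierarchical feature maps from ResNet152 (layer1; the stage choice is assumed)."""
    x = backbone.maxpool(backbone.relu(backbone.bn1(backbone.conv1(img))))
    return backbone.layer1(x)  # shape (1, 256, H/4, W/4)


def fusion_weight(feat: torch.Tensor, out_hw) -> torch.Tensor:
    """Collapse feature maps with max + average pooling into a single activity map."""
    act = 0.5 * feat.amax(dim=1, keepdim=True) + 0.5 * feat.mean(dim=1, keepdim=True)
    return F.interpolate(act, size=out_hw, mode="bilinear", align_corners=False)


def fuse(vis_gray: np.ndarray, ir_gray: np.ndarray) -> np.ndarray:
    """vis_gray, ir_gray: registered single-channel uint8 arrays of equal size (assumed)."""
    backbone = resnet152(weights=ResNet152_Weights.IMAGENET1K_V1).eval()
    vis = enhance_visible(vis_gray)
    ir = ir_gray.astype(np.float32) / 255.0
    h, w = vis.shape
    # Replicate each grayscale image to 3 channels; ImageNet normalization omitted for brevity.
    to_tensor = lambda a: torch.from_numpy(a)[None, None].repeat(1, 3, 1, 1)
    with torch.no_grad():
        w_vis = fusion_weight(deep_features(to_tensor(vis), backbone), (h, w))
        w_ir = fusion_weight(deep_features(to_tensor(ir), backbone), (h, w))
    # Per-pixel softmax weights stand in for the LDA-derived weighting used in the paper.
    weights = torch.softmax(torch.cat([w_vis, w_ir], dim=1), dim=1)
    fused = weights[:, 0:1] * torch.from_numpy(vis)[None, None] \
          + weights[:, 1:2] * torch.from_numpy(ir)[None, None]
    return (fused.squeeze().numpy() * 255.0).astype(np.uint8)

The softmax over pooled feature activity is a common generic stand-in for learned or LDA-derived fusion weights; the paper's actual enhancement and weighting details are in the full text linked above.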