BMFusion: Bridging the Gap Between Dark and Bright in Infrared-Visible Imaging Fusion

Detailed bibliography
Published in: Electronics vol. 13, no. 24 (2024), p. 5005
Main Author: Liu, Chengwen
Other Authors: Liao, Bin; Chang, Zhuoyue
Published: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3149599311
003 UK-CbPIL
022 |a 2079-9292 
024 7 |a 10.3390/electronics13245005  |2 doi 
035 |a 3149599311 
045 2 |b d20240101  |b d20241231 
084 |a 231458  |2 nlm 
100 1 |a Liu, Chengwen 
245 1 |a BMFusion: Bridging the Gap Between Dark and Bright in Infrared-Visible Imaging Fusion 
260 |b MDPI AG  |c 2024 
513 |a Journal Article 
520 3 |a The fusion of infrared and visible light images is a key technology for enhancing visual perception in complex environments and for improving performance on downstream high-level vision tasks. However, because the quality of visible light images degrades severely in low-light or nighttime scenes, most existing fusion methods struggle to recover sufficient texture detail and salient features in such scenes, which lowers fusion quality. To address this issue, this article proposes a new image fusion method, BMFusion, which aims to substantially improve the quality of fused images in low-light and nighttime scenes and to generate high-quality fused images around the clock. First, a brightness attention module composed of brightness attention units is designed; it extracts multimodal features by combining the SimAM attention mechanism with a Transformer architecture, applying progressive brightness attention during feature extraction to effectively enhance both brightness and features. Second, a complementary fusion module is designed to deeply fuse infrared and visible light features, ensuring that the features of each modality complement and reinforce one another during fusion while minimizing information loss. In addition, a feature reconstruction network that combines CLIP-guided semantic vectors with neighborhood attention enhancement is proposed for the reconstruction stage; it uses a KAN module to perform channel-adaptive optimization, preserving the semantic consistency and detail integrity of the fused image. Experimental results on numerous public datasets demonstrate that BMFusion produces fused images with higher visual quality and richer detail in night and low-light environments than various existing state-of-the-art (SOTA) algorithms, and that the fused images significantly improve performance on high-level vision tasks. These results show the great potential and broad application prospects of this method in multimodal image fusion. 
653 |a Feature extraction 
653 |a Visual tasks 
653 |a Performance enhancement 
653 |a Semantics 
653 |a Deep learning 
653 |a Photodegradation 
653 |a Image reconstruction 
653 |a Visual perception 
653 |a Task complexity 
653 |a Brightness 
653 |a Visual fields 
653 |a Image degradation 
653 |a Attention 
653 |a Night 
653 |a Infrared imagery 
653 |a Integrated approach 
653 |a Computer vision 
653 |a Modules 
653 |a Algorithms 
653 |a Image quality 
653 |a Infrared imaging 
653 |a Light 
653 |a Visual perception driven algorithms 
700 1 |a Liao, Bin 
700 1 |a Chang, Zhuoyue 
773 0 |t Electronics  |g vol. 13, no. 24 (2024), p. 5005 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3149599311/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3149599311/fulltextwithgraphics/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3149599311/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
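
Note on the SimAM attention mechanism named in the abstract: below is a minimal, illustrative PyTorch sketch of the published parameter-free SimAM module (energy-based per-activation weighting). It is background for readers of this record only and an assumption about a generic SimAM implementation, not code from BMFusion.

import torch

def simam(x: torch.Tensor, e_lambda: float = 1e-4) -> torch.Tensor:
    # Parameter-free SimAM attention: weight each activation by an
    # energy-based saliency score computed per channel (illustrative
    # sketch, not the BMFusion authors' code).
    _, _, h, w = x.shape
    n = h * w - 1
    # Squared deviation of each activation from its channel mean.
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    # Channel-wise variance estimate used by the energy function.
    v = d.sum(dim=(2, 3), keepdim=True) / n
    # Inverse energy: more distinctive activations receive larger weights.
    e_inv = d / (4 * (v + e_lambda)) + 0.5
    return x * torch.sigmoid(e_inv)

if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)  # e.g. an infrared or visible feature map
    print(simam(feats).shape)           # torch.Size([1, 32, 64, 64])

Because SimAM introduces no learnable parameters, it can be combined with a Transformer-based feature extractor, as the abstract describes, without increasing the model's parameter count.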