Polarization-Guided Deep Fusion for Real-Time Enhancement of Day–Night Tunnel Traffic Scenes: Dataset, Algorithm, and Network

Saved in:
Bibliographic Details
Container / Database: Photonics vol. 12, no. 12 (2025), p. 1206-1227
Main Author: Rao Renhao
Other Authors: Cui Changcai; Chen, Liang; Ouyang Zhizhao; Chen, Shuang
Published in:
MDPI AG
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3286335860
003 UK-CbPIL
022 |a 2304-6732 
024 7 |a 10.3390/photonics12121206  |2 doi 
035 |a 3286335860 
045 2 |b d20250101  |b d20251231 
084 |a 231546  |2 nlm 
100 1 |a Rao Renhao  |u Institute of Manufacturing Engineering, Huaqiao University, Xiamen 361021, China; rrh@stu.hqu.edu.cn (R.R.); ouyangzz@stu.hqu.edu.cn (Z.O.); 22013080004@stu.hqu.edu.cn (S.C.) 
245 1 |a Polarization-Guided Deep Fusion for Real-Time Enhancement of Day–Night Tunnel Traffic Scenes: Dataset, Algorithm, and Network 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a The abrupt light-to-dark or dark-to-light transitions at tunnel entrances and exits cause short-term, large-scale illumination changes, leading traditional RGB perception to suffer from exposure mutations, glare, and noise accumulation at critical moments, thereby triggering perception failures and blind zones. Addressing this typical failure scenario, this paper proposes a closed-loop enhancement solution centered on polarization imaging as a core physical prior, comprising a real-world polarimetric road dataset, a polarimetric physics-enhanced algorithm, and a beyond-fusion network, while satisfying both perception enhancement and real-time constraints. First, we construct the POLAR-GLV dataset, which is captured using a four-angle polarization camera under real highway tunnel conditions, covering the entire process of entering tunnels, inside tunnels, and exiting tunnels, systematically collecting data on adverse illumination and failure distributions in day–night traffic scenes. Second, we propose the Polarimetric Physical Enhancement with Adaptive Modulation (PPEAM) method, which uses Stokes parameters, DoLP, and AoLP as constraints. Leveraging the glare sensitivity of DoLP and richer texture information, it adaptively performs dark region enhancement and glare suppression according to scene brightness and dark region ratio, providing real-time polarization-based image enhancement. Finally, we design the Polar-PENet beyond-fusion network, which introduces Polarization-Aware Gates (PAG) and CBAM on top of physical priors, coupled with detection-driven perception-oriented loss and a beyond mechanism to explicitly fuse physics and deep semantics to surpass physical limitations. 
Experimental results show that compared to original images, Polar-PENet (beyond-fusion network) achieves PSNR and SSIM scores of 19.37 and 0.5487, respectively, on image quality metrics, surpassing the performance of PPEAM (polarimetric physics-enhanced algorithm) which scores 18.89 and 0.5257. In terms of downstream object detection performance, Polar-PENet performs exceptionally well in areas with drastic illumination changes such as tunnel entrances and exits, achieving a mAP of 63.7%, representing a 99.7% improvement over original images and a 12.1% performance boost over PPEAM’s 56.8%. In terms of processing speed, Polar-PENet is 2.85 times faster than the physics-enhanced algorithm PPEAM, with an inference speed of 183.45 frames per second, meeting the real-time requirements of autonomous driving and laying a solid foundation for practical deployment in edge computing environments. The research validates the effective paradigm of using polarimetric physics as a prior and surpassing physics through learning methods. 
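The abstract describes using Stokes parameters, DoLP, and AoLP derived from a four-angle polarization camera as physical constraints. The standard way to obtain these quantities from intensity images captured through polarizers at 0°, 45°, 90°, and 135° can be sketched as follows; this is an illustrative implementation of the textbook formulas, not the paper's PPEAM code, and the function name is hypothetical.

```python
import numpy as np

def stokes_from_four_angles(i0, i45, i90, i135):
    """Compute linear Stokes parameters, DoLP, and AoLP from intensity
    images taken at 0/45/90/135-degree polarizer angles, the pixel layout
    of a division-of-focal-plane polarization camera."""
    # Total intensity: each pixel pair (0/90 or 45/135) sums to S0,
    # so averaging the two estimates reduces noise.
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90      # horizontal vs. vertical linear component
    s2 = i45 - i135    # +45 vs. -45 diagonal linear component
    eps = 1e-8         # guard against division by zero in dark pixels
    # Degree of linear polarization: fraction of light that is linearly
    # polarized (0 = unpolarized, 1 = fully polarized).
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    # Angle of linear polarization, in radians, range (-pi/2, pi/2].
    aolp = 0.5 * np.arctan2(s2, s1)
    return s0, s1, s2, dolp, aolp
```

For fully horizontally polarized light (all intensity at 0°, none at 90°, half at each diagonal), this yields DoLP ≈ 1 and AoLP = 0, matching the glare-sensitivity property of DoLP that the abstract exploits: specular glare is strongly polarized, so it stands out in the DoLP channel.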
653 |a Lighting systems 
653 |a Stokes parameters 
653 |a Deep learning 
653 |a Datasets 
653 |a Polarimetry 
653 |a Tunnels 
653 |a Algorithms 
653 |a Physics 
653 |a Glare 
653 |a Parameter sensitivity 
653 |a Real time 
653 |a Illumination 
653 |a Edge computing 
653 |a Closed loops 
653 |a Adaptation 
653 |a Polarization 
653 |a Dark adaptation 
653 |a Visual perception 
653 |a Vehicles 
653 |a Cameras 
653 |a Semantics 
653 |a Image enhancement 
653 |a Sensors 
653 |a Entrances 
653 |a Perception 
653 |a Night 
653 |a Image quality 
653 |a Object recognition 
653 |a Light 
653 |a Constraints 
653 |a Temporal perception 
700 1 |a Cui Changcai  |u College of Metrology Measurement and Instrument, China Jiliang University, Hangzhou 310018, China 
700 1 |a Chen, Liang  |u Fujian Intelligent Connected Vehicle Product Quality Inspection Center, Xiamen 361004, China; chenliang@xmzjy.org 
700 1 |a Ouyang Zhizhao  |u Institute of Manufacturing Engineering, Huaqiao University, Xiamen 361021, China; rrh@stu.hqu.edu.cn (R.R.); ouyangzz@stu.hqu.edu.cn (Z.O.); 22013080004@stu.hqu.edu.cn (S.C.) 
700 1 |a Chen, Shuang  |u Institute of Manufacturing Engineering, Huaqiao University, Xiamen 361021, China; rrh@stu.hqu.edu.cn (R.R.); ouyangzz@stu.hqu.edu.cn (Z.O.); 22013080004@stu.hqu.edu.cn (S.C.) 
773 0 |t Photonics  |g vol. 12, no. 12 (2025), p. 1206-1227 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3286335860/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3286335860/fulltextwithgraphics/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3286335860/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch