Joint Luminance-Saliency Prior and Attention for Underwater Image Quality Assessment

Bibliographic Details
Published in: Remote Sensing vol. 16, no. 16 (2024), p. 3021
Main Author: Lin, Zhiqiang
Other Authors: He, Zhouyan, Jin, Chongchong, Luo, Ting, Chen, Yeyao
Published: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3098193888
003 UK-CbPIL
022 |a 2072-4292 
024 7 |a 10.3390/rs16163021  |2 doi 
035 |a 3098193888 
045 2 |b d20240101  |b d20241231 
084 |a 231556  |2 nlm 
100 1 |a Lin, Zhiqiang  |u College of Science and Technology, Ningbo University, Ningbo 315212, China; 2211170008@nbu.edu.cn (Z.L.); jinchongchong@nbu.edu.cn (C.J.); luoting@nbu.edu.cn (T.L.) 
245 1 |a Joint Luminance-Saliency Prior and Attention for Underwater Image Quality Assessment 
260 |b MDPI AG  |c 2024 
513 |a Journal Article 
520 3 |a Underwater images, as a crucial medium for storing ocean information from underwater sensors, play a vital role in various underwater tasks. However, they are prone to distortion due to the imaging environment, leading to a decline in visual quality that various marine vision systems urgently need to address. It is therefore necessary to develop underwater image enhancement (UIE) methods and corresponding quality assessment methods. At present, most underwater image quality assessment (UIQA) methods rely primarily on handcrafted features that characterize degradation attributes; these struggle to measure complex mixed distortions and, in practical applications, often diverge from human visual perception. Furthermore, current UIQA methods do not consider quality from the perspective of perceived enhancement effects. To this end, this paper employs luminance and saliency priors as critical visual information, for the first time, to measure the global and local quality improvements achieved by UIE algorithms; the proposed model is named JLSAU. JLSAU is built upon a pyramid-structured backbone, supplemented by the Luminance Feature Extraction Module (LFEM) and the Saliency Weight Learning Module (SWLM), which obtain perceptual features with luminance and saliency priors at multiple scales. The luminance prior captures visually sensitive global luminance distortion, including histogram statistical features and grayscale features with positional information. The saliency prior captures visual information that reflects local quality variation in both the spatial and channel domains. Finally, to model the relationships among the different levels of visual information contained in the multi-scale features, the Attention Feature Fusion Module (AFFM) is proposed. Experimental results on the public UIQE and UWIQA datasets demonstrate that the proposed JLSAU outperforms existing state-of-the-art UIQA methods. 
653 |a Feature extraction 
653 |a Visual tasks 
653 |a Visual perception 
653 |a Salience 
653 |a Balances (scales) 
653 |a Image degradation 
653 |a Spatial discrimination 
653 |a Modules 
653 |a Visual stimuli 
653 |a Attention 
653 |a Visual effects 
653 |a Statistical models 
653 |a Visual perception driven algorithms 
653 |a Quality assessment 
653 |a Luminance 
653 |a Image enhancement 
653 |a Quality control 
653 |a Distortion 
653 |a Underwater detectors 
653 |a Supplements 
653 |a Algorithms 
653 |a Image quality 
653 |a Perception 
653 |a Vision systems 
653 |a Underwater 
653 |a Temporal perception 
653 |a Parameter estimation 
700 1 |a He, Zhouyan  |u College of Science and Technology, Ningbo University, Ningbo 315212, China; 2211170008@nbu.edu.cn (Z.L.); jinchongchong@nbu.edu.cn (C.J.); luoting@nbu.edu.cn (T.L.) 
700 1 |a Jin, Chongchong  |u College of Science and Technology, Ningbo University, Ningbo 315212, China; 2211170008@nbu.edu.cn (Z.L.); jinchongchong@nbu.edu.cn (C.J.); luoting@nbu.edu.cn (T.L.) 
700 1 |a Luo, Ting  |u College of Science and Technology, Ningbo University, Ningbo 315212, China; 2211170008@nbu.edu.cn (Z.L.); jinchongchong@nbu.edu.cn (C.J.); luoting@nbu.edu.cn (T.L.) 
700 1 |a Chen, Yeyao  |u Faculty of Information Science and Engineering, Ningbo University, Ningbo 315212, China; chenyeyao@nbu.edu.cn 
773 0 |t Remote Sensing  |g vol. 16, no. 16 (2024), p. 3021 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3098193888/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3098193888/fulltextwithgraphics/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3098193888/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
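
Note: the 520 abstract above outlines JLSAU's architecture: a pyramid-structured backbone supplemented by an LFEM (luminance prior), an SWLM (saliency prior), and an AFFM (multi-scale attention fusion). The sketch below is a minimal PyTorch rendering of that pipeline for orientation only. The module names and their roles come from the abstract; every internal detail (channel widths, histogram bin count, layer choices, the saliency-map source, and the score head) is an assumption of this sketch, not the authors' published implementation.

# Hypothetical sketch of the JLSAU pipeline described in the abstract above.
# Module names (LFEM, SWLM, AFFM) follow the abstract; all internals are
# illustrative assumptions, not the published implementation.
import torch
import torch.nn as nn

class LFEM(nn.Module):
    # Luminance prior (assumed form): a normalized grayscale histogram gives
    # global statistics; a conv on the grayscale map keeps positional info.
    def __init__(self, bins=64, dim=64):
        super().__init__()
        self.bins = bins
        self.hist_mlp = nn.Sequential(nn.Linear(bins, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))
        self.gray_conv = nn.Conv2d(1, dim, 3, padding=1)

    def forward(self, gray):  # gray: (B, 1, H, W), values in [0, 1]
        hist = torch.stack([torch.histc(g, self.bins, 0.0, 1.0) for g in gray])
        hist = hist / hist.sum(dim=1, keepdim=True)        # histogram statistics
        h = self.hist_mlp(hist)                            # (B, dim) global prior
        g = self.gray_conv(gray)                           # (B, dim, H, W) positional
        return g + h[:, :, None, None]                     # broadcast prior spatially

class SWLM(nn.Module):
    # Saliency prior (assumed form): weight features in the spatial domain from
    # a saliency map, then in the channel domain via squeeze-and-excite gating.
    def __init__(self, dim=64):
        super().__init__()
        self.spatial = nn.Conv2d(1, 1, 7, padding=3)
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(dim, dim, 1), nn.Sigmoid())

    def forward(self, feat, sal):  # feat: (B, dim, H, W); sal: (B, 1, H, W)
        feat = feat * torch.sigmoid(self.spatial(sal))     # spatial weighting
        return feat * self.channel(feat)                   # channel weighting

class AFFM(nn.Module):
    # Attention fusion (assumed form): self-attention over one pooled token per
    # pyramid scale models relations among the levels of visual information.
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):  # list of (B, dim, Hi, Wi), one per scale
        tokens = torch.stack([f.mean(dim=(2, 3)) for f in feats], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)       # cross-scale attention
        return self.norm(fused + tokens).mean(dim=1)       # (B, dim)

class JLSAU(nn.Module):
    # Pyramid backbone + both priors + attention fusion -> scalar quality score.
    def __init__(self, dim=64, scales=3):
        super().__init__()
        self.stem = nn.Conv2d(3, dim, 3, padding=1)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
            for _ in range(scales))
        self.lfem, self.swlm, self.affm = LFEM(dim=dim), SWLM(dim), AFFM(dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, img, sal):  # img: (B, 3, H, W); sal: (B, 1, H, W)
        gray = img.mean(dim=1, keepdim=True)               # crude luminance proxy
        x = self.stem(img) + self.lfem(gray)               # inject luminance prior
        feats = []
        for stage in self.stages:                          # pyramid of scales
            x = stage(x)
            s = nn.functional.interpolate(sal, size=x.shape[-2:],
                                          mode="bilinear", align_corners=False)
            feats.append(self.swlm(x, s))                  # saliency-weighted features
        return self.head(self.affm(feats)).squeeze(-1)     # predicted quality score

# Usage: a random image plus a placeholder saliency map yields one score each.
model = JLSAU()
score = model(torch.rand(2, 3, 256, 256), torch.rand(2, 1, 256, 256))
print(score.shape)  # torch.Size([2])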