GPRNet: A Geometric Prior-Refined Semantic Segmentation Network for Land Use and Land Cover Mapping

Bibliographic details
Published in: Remote Sensing vol. 17, no. 23 (2025), p. 3856-3885
Main author: Li Zhuozheng
Other authors: Xu Zhennan, Xia Runliang, Sun Jiahao, Mu Ruihui, Chen Liang, Liu Daofang, Li Xin
Published: MDPI AG

MARC

LEADER 00000nab a2200000uu 4500
001 3280962992
003 UK-CbPIL
022 |a 2072-4292 
024 7 |a 10.3390/rs17233856  |2 doi 
035 |a 3280962992 
045 2 |b d20250101  |b d20251231 
084 |a 231556  |2 nlm 
100 1 |a Li Zhuozheng  |u College of Hydrology and Water Resources, Hohai University, Nanjing 210098, China; zhuozhengli@hhu.edu.cn 
245 1 |a GPRNet: A Geometric Prior-Refined Semantic Segmentation Network for Land Use and Land Cover Mapping 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a <sec sec-type="highlights"> What are the main findings? <list list-type="bullet"> <list-item> We propose GPRNet, a geometry-aware semantic segmentation framework that integrates a Geometric Prior-Refined Block (GPRB) and a Mutual Calibrated Fusion Module (MCFM) to enhance boundary sensitivity and cross-stage semantic consistency. </list-item> <list-item> GPRB leverages learnable directional derivatives to construct structure-aware strength and orientation maps, enabling more accurate spatial localization in complex scenes. </list-item> <list-item> MCFM introduces geometric alignment and semantic enhancement mechanisms that effectively reduce the encoder–decoder feature gap. </list-item> <list-item> GPRNet achieves consistent performance gains on ISPRS Potsdam and LoveDA, improving mIoU by up to 1.7% and 1.3%, respectively, over strong CNN-, attention-, and transformer-based baselines. </list-item> </list> What are the implications of the main findings? <list list-type="bullet"> <list-item> Incorporating geometric priors through learnable gradient-based features improves the model’s ability to capture structural patterns and preserve fine boundaries in high-resolution remote sensing imagery. </list-item> <list-item> The mutual calibration mechanism demonstrates an effective design for encoder–decoder interaction, showing potential for broader applicability across segmentation architectures and modalities. </list-item> <list-item> The empirical evidence indicates that geometry-informed representation learning can serve as a general principle for enhancing land-cover mapping in diverse and structurally complex environments. </list-item> </list> </sec> Semantic segmentation of high-resolution remote sensing images remains a challenging task due to the intricate spatial structures, scale variability, and semantic ambiguity among ground objects. 
Moreover, the reliable delineation of fine-grained boundaries continues to impose difficulties on existing CNN- and transformer-based models, particularly in heterogeneous urban and rural environments. In this study, we propose GPRNet, a novel geometry-aware segmentation framework that leverages geometric priors and cross-stage semantic alignment for more precise land-cover classification. Central to our approach is the Geometric Prior-Refined Block (GPRB), which learns directional derivative filters, initialized with Sobel-like operators, to generate edge-aware strength and orientation maps that explicitly encode structural cues. These maps are used to guide structure-aware attention modulation, enabling refined spatial localization. Additionally, we introduce the Mutual Calibrated Fusion Module (MCFM) to mitigate the semantic gap between encoder and decoder features by incorporating cross-stage geometric alignment and semantic enhancement mechanisms. Extensive experiments conducted on the ISPRS Potsdam and LoveDA datasets validate the effectiveness of the proposed method, with GPRNet achieving improvements of up to 1.7% mIoU on Potsdam and 1.3% mIoU on LoveDA over strong recent baselines. Furthermore, the model maintains competitive inference efficiency, suggesting a favorable balance between accuracy and computational cost. These results demonstrate the promising potential of geometric-prior integration and mutual calibration in advancing semantic segmentation in complex environments. 
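The abstract above describes the GPRB as learning directional derivative filters, initialized with Sobel-like operators, that yield edge-strength and orientation maps. As an illustrative sketch only (not the authors' implementation, where these kernels would be learnable network weights), the fixed-kernel starting point can be expressed in NumPy/SciPy; the function and variable names here are assumptions:

```python
# Illustrative sketch of Sobel-initialized strength/orientation maps;
# NOT the GPRNet implementation. In GPRB the kernels are learnable,
# here they stay fixed at their Sobel initialization.
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # horizontal derivative
SOBEL_Y = SOBEL_X.T                            # vertical derivative

def strength_orientation(img: np.ndarray):
    """Return edge-strength and orientation maps for a 2-D image."""
    gx = convolve(img, SOBEL_X, mode="nearest")
    gy = convolve(img, SOBEL_Y, mode="nearest")
    strength = np.hypot(gx, gy)        # gradient magnitude
    orientation = np.arctan2(gy, gx)   # gradient direction (radians)
    return strength, orientation

# A vertical step edge: strength peaks along the edge columns
# and is zero in the flat regions.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
s, o = strength_orientation(img)
```

In the paper's setting, making these kernels trainable lets the network adapt the directional responses to dataset-specific boundary statistics while keeping the Sobel initialization as a geometric prior.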
653 |a Accuracy 
653 |a Deep learning 
653 |a Image resolution 
653 |a Calibration 
653 |a Artificial neural networks 
653 |a Boundaries 
653 |a Biodiversity 
653 |a Remote sensing 
653 |a Mapping 
653 |a Attention 
653 |a Architecture 
653 |a Image processing 
653 |a Spatial discrimination 
653 |a Land use 
653 |a Semantic segmentation 
653 |a Modules 
653 |a Localization 
653 |a Rural environments 
653 |a Maps 
653 |a Machine learning 
653 |a Land cover 
653 |a Coders 
653 |a Alignment 
653 |a Image segmentation 
653 |a Operators (mathematics) 
653 |a High resolution 
653 |a Effectiveness 
653 |a Urban environments 
653 |a Semantics 
700 1 |a Xu Zhennan  |u College of Computer Science and Software Engineering, Hohai University, Nanjing 211100, China; zhennanxu@hhu.edu.cn 
700 1 |a Xia Runliang  |u Information Center, Ministry of Water Resources, Beijing 100053, China; r.xia@163.com 
700 1 |a Sun Jiahao  |u School of Design and Art, Changsha University of Science and Technology, Changsha 410114, China; sunjiahao@csust.edu.cn 
700 1 |a Mu Ruihui  |u College of Computer and Information Engineering, Xinxiang University, Xinxiang 453000, China; muruihui@126.com 
700 1 |a Chen, Liang  |u Information Center, Yellow River Conservancy Commission (YRCC), Zhengzhou 450000, China; chlg564@163.com (L.C.); liudaofang@yrcc.gov.cn (D.L.) 
700 1 |a Liu Daofang  |u Information Center, Yellow River Conservancy Commission (YRCC), Zhengzhou 450000, China; chlg564@163.com (L.C.); liudaofang@yrcc.gov.cn (D.L.) 
700 1 |a Li, Xin  |u College of Computer Science and Software Engineering, Hohai University, Nanjing 211100, China; zhennanxu@hhu.edu.cn 
773 0 |t Remote Sensing  |g vol. 17, no. 23 (2025), p. 3856-3885 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3280962992/abstract/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3280962992/fulltextwithgraphics/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3280962992/fulltextPDF/embedded/75I98GEZK8WCJMPQ?source=fedsrch