QiGSAN: A Novel Probability-Informed Approach for Small Object Segmentation in the Case of Limited Image Datasets

Bibliographic Details
Published in: Big Data and Cognitive Computing vol. 9, no. 9 (2025), p. 239-264
Main Author: Gorshenin, Andrey
Other Authors: Dostovalova, Anastasia
Published: MDPI AG
Description
Abstract: The paper presents a novel probability-informed approach to improving the accuracy of small object semantic segmentation in high-resolution imagery datasets with imbalanced classes and a limited volume of samples. Small objects are those with a small pixel footprint in the input image, for example, ships in the ocean. Informing in this context means using mathematical models to represent data in the layers of deep neural networks. Thus, the ensemble Quadtree-informed Graph Self-Attention Networks (QiGSANs) are proposed. New architectural blocks, informed by types of Markov random fields such as quadtrees, have been introduced to capture the interconnections between features in images at different spatial resolutions during the graph convolution of superpixel subregions. It has been analytically proven that quadtree-informed graph convolutional neural networks, a part of QiGSAN, tend to achieve faster loss reduction compared to convolutional architectures. This justifies the effectiveness of probability-informed modifications based on quadtrees. To empirically demonstrate the processing of real small data with imbalanced object classes using QiGSAN, two open datasets of synthetic aperture radar (SAR) imagery (up to 0.5 m per pixel) are used: the High Resolution SAR Images Dataset (HRSID) and the SAR Ship Detection Dataset (SSDD). The results of QiGSAN are compared to those of the transformers SegFormer and LWGANet, which constitute a new state of the art for UAV (Unmanned Aerial Vehicle) and SAR image processing. They are also compared to convolutional neural networks and several ensemble implementations using other graph neural networks.
QiGSAN significantly increases the F1-score values by up to 63.93%, 48.57%, and 9.84% compared to transformers, convolutional neural networks, and other ensemble architectures, respectively. QiGSAN also outperformed the base segmentors on the mIoU (mean intersection-over-union) metric: the highest increase was 35.79%. Therefore, our approach to knowledge extraction using mathematical models allows us to significantly improve modern computer vision techniques for imbalanced data.
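The abstract reports gains in F1-score and mIoU. As a reference for how these two segmentation metrics are typically computed from binary masks (the function name and the two-class setup are illustrative, not taken from the paper), a minimal sketch:

```python
import numpy as np

def f1_and_miou(pred, target):
    """Per-object-class F1 and two-class mean IoU for binary segmentation masks.

    pred, target: boolean arrays of identical shape (True = object pixel).
    """
    tp = np.logical_and(pred, target).sum()        # true positives
    fp = np.logical_and(pred, ~target).sum()       # false positives
    fn = np.logical_and(~pred, target).sum()       # false negatives
    tn = np.logical_and(~pred, ~target).sum()      # true negatives

    # F1 = 2*TP / (2*TP + FP + FN); define as 1.0 when both masks are empty.
    denom = 2 * tp + fp + fn
    f1 = 2 * tp / denom if denom else 1.0

    # mIoU averages the IoU of the object class and the background class.
    iou_obj = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    iou_bg = tn / (tn + fp + fn) if (tn + fp + fn) else 1.0
    return f1, (iou_obj + iou_bg) / 2
```

With small, imbalanced objects such as ships, F1 and the object-class IoU are dominated by the few foreground pixels, which is why these metrics (rather than plain pixel accuracy) are used for comparison.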
ISSN: 2504-2289
DOI: 10.3390/bdcc9090239
Source: Advanced Technologies & Aerospace Database