Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability

Bibliographic Details
Published in: Remote Sensing, vol. 17, no. 11 (2025), p. 1943
Main Author: Chen Tianrui
Other Authors: Zhang Limeng, Guo Weiwei, Zhang Zenghui, Datcu Mihai
Publisher: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: Deep neural networks (DNNs) have shown strong performance in synthetic aperture radar (SAR) image classification. However, their “black-box” nature limits interpretability and poses challenges for robustness, which is critical for sensitive applications such as disaster assessment, environmental monitoring, and agricultural insurance. This study systematically evaluates the adversarial robustness of five representative DNNs (VGG11/16, ResNet18/101, and A-ConvNet) under a variety of attack and defense settings. Using eXplainable AI (XAI) techniques and attribution-based visualizations, we analyze how adversarial perturbations and adversarial training affect model behavior and decision logic. Our results reveal significant robustness differences across architectures, highlight interpretability limitations, and suggest practical guidelines for building more robust SAR classification systems. We also discuss challenges associated with large-scale, multi-class land use and land cover (LULC) classification under adversarial conditions.
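For context on the attack and attribution settings mentioned in the abstract, the following is a minimal illustrative sketch only: it applies a single-step FGSM perturbation to a dummy input of a placeholder classifier and derives a gradient-based saliency map. The model choice (torchvision ResNet-18), input shape, label, and epsilon value are assumptions for illustration and are not the SAR models, data, or settings evaluated in the paper.

    # Minimal FGSM + gradient-saliency sketch (illustrative assumptions only;
    # the model, input shape, label, and epsilon are placeholders, not the
    # SAR classifiers or attack settings studied in the paper).
    import torch
    import torch.nn as nn
    import torchvision.models as models

    model = models.resnet18(weights=None)   # stand-in classifier
    model.eval()

    x = torch.randn(1, 3, 224, 224, requires_grad=True)  # dummy image chip
    y = torch.tensor([3])                                 # dummy class label

    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()                          # gradient of the loss w.r.t. the input

    epsilon = 0.03                           # L-infinity perturbation budget
    x_adv = (x + epsilon * x.grad.sign()).detach()   # FGSM step

    # Gradient saliency: per-pixel attribution from |d loss / d input|
    saliency = x.grad.abs().max(dim=1).values        # shape (1, 224, 224)

    with torch.no_grad():
        print("clean prediction:", model(x).argmax(dim=1).item())
        print("adversarial prediction:", model(x_adv).argmax(dim=1).item())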
ISSN: 2072-4292
DOI: 10.3390/rs17111943
Source: Advanced Technologies & Aerospace Database