Adversarial Training for Aerial Disaster Recognition: A Curriculum-Based Defense Against PGD Attacks

Bibliographic Details
Published in: Electronics, vol. 14, no. 16 (2025), p. 3210-3225
Main Author: Kose, Kubra
Other Authors: Zhou, Bing
Published: MDPI AG
Description
Abstract: Unmanned aerial vehicles (UAVs) play an ever-increasing role in disaster response and remote sensing. However, the deep learning models they rely on remain highly vulnerable to adversarial attacks. This paper presents an evaluation and defense framework aimed at enhancing adversarial robustness in aerial disaster image classification using the AIDERV2 dataset. Our methodology is structured into the following four phases: (I) baseline training with clean data using ResNet-50, (II) vulnerability assessment under Projected Gradient Descent (PGD) attacks, (III) adversarial training with PGD to improve model resilience, and (IV) comprehensive post-defense evaluation under identical attack scenarios. The baseline model achieves 93.25% accuracy on clean data but drops to as low as 21.00% under strong adversarial perturbations. In contrast, the adversarially trained model maintains over 75.00% accuracy across all PGD configurations, reducing the attack success rate by more than 60%. We introduce metrics such as Clean Accuracy, Adversarial Accuracy, Accuracy Drop, and Attack Success Rate to evaluate defense performance. Our results show the practical importance of adversarial training for safety-critical UAV applications and provide a reference point for future research. This work contributes to making deep learning systems on aerial platforms more secure, robust, and reliable in mission-critical environments.
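
The defense summarized above, PGD adversarial training, follows the standard recipe of crafting worst-case perturbations during training and fitting the network on them. The sketch below is a minimal PyTorch illustration of that recipe, not the authors' released code: the epsilon, step size, iteration count, class count, and the `pgd_attack`/`robustness_metrics` helpers are assumptions for illustration, and the Attack Success Rate shown uses one common definition (fraction of attacked inputs misclassified), which may differ from the paper's exact metric.

```python
# Minimal PGD adversarial-training sketch in PyTorch. All hyperparameters
# (eps, alpha, steps, num_classes) are illustrative assumptions, not the
# paper's reported configuration.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-infinity PGD adversarial examples for a batch (x, y) in [0, 1]."""
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)  # random start
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss with a signed-gradient step, then project back
        # into the epsilon ball around the clean input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()


def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of PGD adversarial training: fit the model on perturbed inputs."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()


def robustness_metrics(clean_acc, adv_acc):
    """Derive Accuracy Drop and one common form of Attack Success Rate."""
    return {
        "accuracy_drop": clean_acc - adv_acc,
        "attack_success_rate": 1.0 - adv_acc,
    }


# Hypothetical setup: a ResNet-50 classifier for the AIDERV2 disaster classes
# (the class count here is a placeholder, not taken from the paper).
model = resnet50(num_classes=5)
```

In the four-phase protocol described in the abstract, the same attack routine would serve both the phase-II vulnerability assessment and the phase-IV post-defense evaluation, with Clean Accuracy and Adversarial Accuracy measured on held-out data and the derived metrics computed from them.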
ISSN:2079-9292
DOI:10.3390/electronics14163210
Source: Advanced Technologies & Aerospace Database