AI Adversarial Attack Detection and Mitigation for AI-Based Systems

Saved in:
Bibliographic Details
Published in: PQDT - Global (2025)
Main Author: Ziras, Georgios
Published:
ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract
Full Text - PDF
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3224597418
003 UK-CbPIL
020 |a 9798286421312 
035 |a 3224597418 
045 2 |b d20250101  |b d20251231 
084 |a 189128  |2 nlm 
100 1 |a Ziras, Georgios 
245 1 |a AI Adversarial Attack Detection and Mitigation for AI-Based Systems 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a The increasing integration of Artificial Intelligence (AI) systems into critical infrastructure such as cybersecurity, healthcare, and finance has introduced significant challenges in ensuring model robustness against adversarial attacks. This thesis investigates the susceptibility of various machine learning (ML) models to adversarial manipulation and explores effective detection and mitigation strategies to enhance resilience. Using the CIC-IDS2017 dataset, a suite of ML classifiers (Decision Tree, Random Forest, Logistic Regression, XGBoost, and a custom PyTorch-based neural network) was trained and subjected to a range of adversarial evasion attacks, including FGSM, PGD, DeepFool, and Carlini-Wagner.
A key focus of this research is the evaluation of both direct and transfer adversarial attacks, revealing that while the traditional models suffered severe performance degradation, the deep learning model exhibited stronger resilience. To improve robustness, adversarial training was employed, significantly enhancing model accuracy under attack, particularly for the PyTorch model, which retained over 98% accuracy in most cases.
Furthermore, this study integrates advanced detection mechanisms from the Adversarial Robustness Toolbox (ART), including Binary Input and Binary Activation Detectors. These detectors demonstrated high recall and precision in identifying adversarial inputs, although their moderate performance on clean samples suggests a trade-off between security and usability. The implementation of a dual-layer detection pipeline within a machine learning system illustrates a practical defense-in-depth approach, capable of blocking or flagging adversarial inputs before they reach the core classifier.
This research contributes a comprehensive analysis of adversarial attack resilience in intrusion detection systems and proposes a scalable architecture for integrating robust training and real-time adversarial detection. Future work will focus on improving detection precision on clean samples, incorporating more diverse datasets, and exploring adaptive defenses to counter evolving attack strategies.
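To make the abstract's attack vocabulary concrete: FGSM, the simplest of the evasion attacks listed, perturbs an input by a step of size eps in the direction of the sign of the loss gradient with respect to that input. The sketch below implements that one step against a toy logistic-regression model in plain Python. It is illustrative only: the thesis applies ART's FastGradientMethod to classifiers trained on CIC-IDS2017, and the weights and feature values here are invented for this example.

```python
import math

# Minimal FGSM sketch: x_adv = x + eps * sign(d loss / d x).
# For logistic regression with binary cross-entropy loss,
# d loss / d x_i = (p - y) * w_i, where p is the predicted probability.
# All values below are toy numbers, not from the thesis.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Predicted probability of class 1 for a linear logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic-regression classifier."""
    p = predict(x, w, b)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0    # toy "trained" weights
x, y = [1.0, 0.5], 1.0     # a point the model classifies correctly

x_adv = fgsm_perturb(x, y, w, b, eps=0.6)
print(predict(x, w, b))      # clean input: probability above 0.5 (class 1)
print(predict(x_adv, w, b))  # perturbed input: pushed below 0.5 (misclassified)
```

The same gradient-sign idea underlies PGD (FGSM iterated with projection onto an eps-ball), which is why adversarial training against these attacks, as in the thesis, typically mixes such perturbed samples into the training set.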
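The dual-layer detection pipeline the abstract describes can be sketched as a simple "detect, then classify" wrapper: every detector inspects the input, and the core classifier runs only if none of them fires. The detectors and classifier below are toy stand-ins with made-up thresholds; the thesis builds its detectors with ART's BinaryInputDetector and BinaryActivationDetector.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass
class Verdict:
    label: str              # "blocked" or the core classifier's prediction
    flagged_by: List[str]   # names of the detectors that fired

def defended_predict(
    x: Sequence[float],
    detectors: List[Tuple[str, Callable[[Sequence[float]], bool]]],
    classify: Callable[[Sequence[float]], str],
) -> Verdict:
    """Defense-in-depth: block the input if any detector flags it."""
    flagged = [name for name, detect in detectors if detect(x)]
    if flagged:
        return Verdict("blocked", flagged)
    return Verdict(classify(x), [])

# Toy stand-ins: an input-space detector that flags out-of-range features,
# and an "activation"-style detector that flags an implausibly large norm.
input_detector = ("binary_input", lambda x: any(v < 0.0 or v > 1.0 for v in x))
activation_detector = ("binary_activation", lambda x: sum(v * v for v in x) > 1.5)
classify = lambda x: "benign" if sum(x) < 1.0 else "attack"

pipeline = [input_detector, activation_detector]
print(defended_predict([0.2, 0.3], pipeline, classify))  # passes both layers
print(defended_predict([0.2, 1.7], pipeline, classify))  # blocked before classifier
```

The abstract's noted trade-off shows up directly in this structure: tightening either detector's threshold blocks more adversarial inputs but also flags more clean traffic before it ever reaches the classifier.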
653 |a Deep learning 
653 |a Distributed network protocols 
653 |a Intrusion detection systems 
653 |a Optimization techniques 
653 |a Cybersecurity 
653 |a Privacy 
653 |a Probability distribution 
653 |a Entropy 
653 |a Internet of Things 
653 |a Business metrics 
653 |a Machine learning 
653 |a Artificial intelligence 
653 |a Autonomous vehicles 
653 |a Neural networks 
653 |a Medical research 
653 |a Defense mechanisms 
653 |a Natural language processing 
653 |a Information technology 
653 |a Medicine 
653 |a Statistics 
653 |a Transportation 
773 0 |t PQDT - Global  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3224597418/abstract/embedded/IZYTEZ3DIR4FRXA2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3224597418/fulltextPDF/embedded/IZYTEZ3DIR4FRXA2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u https://dione.lib.unipi.gr/xmlui/handle/unipi/17622