Adversarial machine learning: a review of methods, tools, and critical industry sectors
| Published in: | The Artificial Intelligence Review, vol. 58, no. 8 (Aug 2025), p. 226 |
|---|---|
| Published: | Springer Nature B.V. |
| Abstract: | The rapid advancement of Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), has produced high-performance models widely used in applications ranging from image recognition and chatbots to autonomous driving and smart grid systems. However, the vulnerability of ML models to adversarial attacks and data poisoning creates security threats, posing risks such as system malfunctions and decision errors. At the same time, data privacy concerns emerge, especially when personal data are used in model training, which can lead to data breaches. This paper surveys the Adversarial Machine Learning (AML) landscape in modern AI systems, focusing on the dual aspects of robustness and privacy. First, we explore adversarial attacks and defenses using comprehensive taxonomies. Next, we investigate robustness benchmarks alongside open-source AML technologies and software tools that ML system stakeholders can use to develop robust AI systems. Finally, we examine the AML landscape in four industry fields, namely automotive, digital healthcare, electrical power and energy systems (EPES), and Large Language Model (LLM)-based Natural Language Processing (NLP) systems, analyzing attacks, defenses, and evaluation concepts, thereby offering a holistic view of the modern AI-reliant industry and promoting enhanced ML robustness and privacy preservation in the future. |
| ISSN: | 0269-2821 (print), 1573-7462 (electronic) |
| DOI: | 10.1007/s10462-025-11147-4 |
| Source: | ABI/INFORM Global |