Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security
| Published in: | The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Conference Proceedings (2025), p. 1653-1664 |
|---|---|
| Main Author: | |
| Other Authors: | |
| Published: | The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Subjects: | |
| Online Access: | Citation/Abstract |
| Conference: | 2025 IEEE International Conference on Quantum Computing and Engineering (QCE), Aug. 30 to Sept. 5, 2025, Albuquerque, NM, USA |
| Abstract: | Quantum Machine Learning (QML) systems inherit vulnerabilities from classical machine learning while introducing new attack surfaces rooted in the physical and algorithmic layers of quantum computing. Despite a growing body of research on individual attack vectors - ranging from adversarial poisoning and evasion to circuit-level backdoors, side-channel leakage, and model extraction - these threats are often analyzed in isolation, with unrealistic assumptions about attacker capabilities and system environments. This fragmentation hampers the development of effective, holistic defense strategies. In this work, we argue that QML security requires more structured modeling of the attack surface, capturing not only individual techniques but also their relationships, prerequisites, and potential impact across the QML pipeline. We propose adapting kill chain models, widely used in classical IT and cybersecurity, to the quantum machine learning context. Such models allow for structured reasoning about attacker objectives, capabilities, and possible multi-stage attack paths - spanning reconnaissance, initial access, manipulation, persistence, and exfiltration. Based on extensive literature analysis, we present a detailed taxonomy of QML attack vectors mapped to corresponding stages in a quantum-aware kill chain framework that is inspired by the MITRE ATLAS for classical machine learning. We highlight interdependencies between physical-level threats (like side-channel leakage and crosstalk faults), data and algorithm manipulation (such as poisoning or circuit backdoors), and privacy attacks (including model extraction and training data inference). This work provides a foundation for more realistic threat modeling and proactive security-in-depth design in the emerging field of quantum machine learning. |
| DOI: | 10.1109/QCE65121.2025.00183 |
| Source: | Science Database |
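
As a rough illustration of the kind of stage-to-vector mapping the abstract describes, the sketch below encodes a hypothetical quantum-aware kill chain as a simple lookup structure. The stage names follow the abstract; the per-stage assignment of attack vectors is an illustrative assumption, not the paper's actual taxonomy.

```python
# Illustrative sketch only: a minimal mapping of QML attack vectors named in the
# abstract to quantum-aware kill chain stages. The assignments below are
# assumptions for demonstration, not the taxonomy presented in the paper.
from typing import Dict, List

KILL_CHAIN: Dict[str, List[str]] = {
    "reconnaissance": ["side-channel leakage", "crosstalk probing"],
    "initial access": ["adversarial evasion", "malicious circuit submission"],
    "manipulation": ["data poisoning", "circuit-level backdoors", "crosstalk fault injection"],
    "persistence": ["backdoored parameterized circuits"],
    "exfiltration": ["model extraction", "training data inference"],
}


def vectors_for_stage(stage: str) -> List[str]:
    """Return the attack vectors mapped to a given kill chain stage."""
    return KILL_CHAIN.get(stage.lower(), [])


if __name__ == "__main__":
    # Print the hypothetical taxonomy, one stage per line.
    for stage, vectors in KILL_CHAIN.items():
        print(f"{stage}: {', '.join(vectors)}")
```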