A Comparative Analysis of Adversarial Robustness for Quantum and Classical Machine Learning Models

Saved in:
Bibliographic Details
Published in: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Conference Proceedings vol. 01 (2024)
Main Author: Wendlinger, Maximilian
Other Authors: Tscharke, Kilian; Debus, Pascal
Published:
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
Online Access: Citation/Abstract

MARC

LEADER 00000nab a2200000uu 4500
001 3153928496
003 UK-CbPIL
024 7 |a 10.1109/QCE60285.2024.00171  |2 doi 
035 |a 3153928496 
045 2 |b d20240101  |b d20241231 
084 |a 228229  |2 nlm 
100 1 |a Wendlinger, Maximilian  |u Fraunhofer Institute for Applied and Integrated Security,Quantum Security Technologies,Garching near Munich,Germany 
245 1 |a A Comparative Analysis of Adversarial Robustness for Quantum and Classical Machine Learning Models 
260 |b The Institute of Electrical and Electronics Engineers, Inc. (IEEE)  |c 2024 
513 |a Conference Proceedings 
520 3 |a Conference Title: 2024 IEEE International Conference on Quantum Computing and Engineering (QCE). Conference Start Date: 2024, Sept. 15. Conference End Date: 2024, Sept. 20. Conference Location: Montreal, QC, Canada. Quantum machine learning (QML) continues to be an area of tremendous interest from research and industry. While QML models have been shown to be vulnerable to adversarial attacks much in the same manner as classical machine learning models, it is still largely unknown how to compare adversarial attacks on quantum versus classical models. In this paper, we show how to systematically investigate the similarities and differences in adversarial robustness of classical and quantum models using transfer attacks, perturbation patterns and Lipschitz bounds. More specifically, we focus on classification tasks on a handcrafted dataset that allows quantitative analysis for feature attribution. This enables us to get insight, both theoretically and experimentally, on the robustness of classification networks. We start by comparing typical QML model architectures such as amplitude and re-upload encoding circuits with variational parameters to a classical ConvNet architecture. Next, we introduce a classical approximation of QML circuits (originally obtained with Random Fourier Features sampling but adapted in this work to fit a trainable encoding) and evaluate this model, denoted Fourier network, in comparison to other architectures. Our findings show that this Fourier network can be seen as a “middle ground” on the quantum-classical boundary. While adversarial attacks successfully transfer across this boundary in both directions, we also show that regularization helps quantum networks to be more robust, which has direct impact on Lipschitz bounds and transfer attacks. 
653 |a Machine learning 
653 |a Regularization 
653 |a Parameter robustness 
653 |a Quantum computing 
653 |a Circuits 
653 |a Classification 
653 |a Robustness 
653 |a Coding 
653 |a Social 
653 |a Economic 
700 1 |a Tscharke, Kilian  |u Fraunhofer Institute for Applied and Integrated Security,Quantum Security Technologies,Garching near Munich,Germany 
700 1 |a Debus, Pascal  |u Fraunhofer Institute for Applied and Integrated Security,Quantum Security Technologies,Garching near Munich,Germany 
773 0 |t The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Conference Proceedings  |g vol. 01 (2024) 
786 0 |d ProQuest  |t Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3153928496/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
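
The abstract above refers to transfer attacks between quantum and classical classifiers. The following is a minimal illustrative sketch of that mechanic only, not code from the paper: an FGSM perturbation is crafted on one ("source") model and evaluated on a second, independently initialized ("target") model. The model shapes, names, and epsilon are hypothetical placeholders, and the models are untrained, since the sketch only demonstrates how a transferred perturbation is constructed and evaluated.

    # Illustrative sketch (not from the paper): FGSM crafted on a source model,
    # evaluated on a separate target model to probe attack transferability.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Two small, independently initialized classifiers standing in for the
    # classical and quantum-approximation models compared in the paper.
    source = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))
    target = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))

    x = torch.rand(1, 4, requires_grad=True)   # a single input sample
    y = torch.tensor([1])                      # its (assumed) true label
    eps = 0.1                                  # perturbation budget (hypothetical)

    # FGSM: one signed-gradient step on the source model's loss.
    loss = nn.functional.cross_entropy(source(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    # Transfer evaluation: does the perturbation crafted on `source` also
    # change the prediction of the independent `target` model?
    print("target on clean input:      ", target(x).argmax(dim=1).item())
    print("target on adversarial input:", target(x_adv).argmax(dim=1).item())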