Federated learning with distributed fixed design quantum chips and quantum channels

Bibliographic Details
Published in: arXiv.org (Oct 9, 2024), p. n/a
Main Author: Daskin, Ammar
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2918404157
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2918404157 
045 0 |b d20241009 
100 1 |a Daskin, Ammar 
245 1 |a Federated learning with distributed fixed design quantum chips and quantum channels 
260 |b Cornell University Library, arXiv.org  |c Oct 9, 2024 
513 |a Working Paper 
520 3 |a Privacy in classical federated learning can be breached by combining local gradient results with engineered queries to the clients. Quantum communication channels, however, are considered more secure because a measurement on the channel causes a loss of information that can be detected by the sender. A quantum version of federated learning can therefore provide better privacy. Additionally, sending an \(N\)-dimensional data vector through a quantum channel requires only \(\log N\) entangled qubits, which can provide efficiency if the data vector is used directly as a quantum state. In this paper, we propose a quantum federated learning model in which fixed design quantum chips are operated based on the quantum states sent by a centralized server. Based on the incoming superposition states, the clients compute their local gradients and send them to the server as quantum states, where they are aggregated to update the parameters. Since the server does not send model parameters, but instead sends the operator as a quantum state, the clients are not required to share the model. This allows for the creation of asynchronous learning models. In addition, the model is fed into the client-side chips directly as a quantum state; therefore, no measurements on the incoming quantum state are needed to obtain model parameters before computing gradients. This can provide efficiency over models where the parameter vector is sent via classical or quantum channels and local gradients are computed from the values obtained for these parameters. 
653 |a Mathematical models 
653 |a Channels 
653 |a Servers 
653 |a Clients 
653 |a Parameters 
653 |a Privacy 
653 |a Qubits (quantum computing) 
773 0 |t arXiv.org  |g (Oct 9, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2918404157/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2401.13421
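
Note (not part of the catalog record): the abstract above relies on two ideas, that an \(N\)-dimensional data vector can be carried by roughly \(\log N\) qubits via amplitude encoding, and that a central server aggregates client gradients to update shared parameters. The following is a minimal classical NumPy sketch of those two ideas only, not the paper's actual quantum protocol; the function name amplitude_encode, the learning rate, the number of clients, and the random gradients are illustrative assumptions.

```python
import numpy as np

def amplitude_encode(x):
    """Pad x to the next power of two and L2-normalize it, mimicking how an
    N-dimensional vector is loaded into ceil(log2 N) amplitude-encoded qubits."""
    n_qubits = int(np.ceil(np.log2(len(x))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded), n_qubits

# Hypothetical federated round: each client returns a local gradient
# (random here), and the server averages them to update the parameters.
rng = np.random.default_rng(0)
params = rng.normal(size=8)                                       # shared model parameters
local_grads = [rng.normal(size=params.shape) for _ in range(3)]   # 3 clients
params -= 0.1 * np.mean(local_grads, axis=0)                      # FedAvg-style update

state, n_qubits = amplitude_encode(params)
print(n_qubits, np.linalg.norm(state))  # 3 qubits suffice for the 8 amplitudes
```

In the proposed scheme these vectors would travel as quantum states rather than classical arrays, and the client-side chips would act on them directly without intermediate measurement; the sketch only shows the bookkeeping the abstract describes.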