Fluent: Round-efficient Secure Aggregation for Private Federated Learning

Saved in:
Bibliographic Details
Published in: arXiv.org (Mar 10, 2024), p. n/a
Main Author: Li, Xincheng
Other Authors: Ning, Jianting; Geong Sen Poh; Leo Yu Zhang; Yin, Xinchun; Zhang, Tianwei
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2955957937
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2955957937 
045 0 |b d20240310 
100 1 |a Li, Xincheng 
245 1 |a Fluent: Round-efficient Secure Aggregation for Private Federated Learning 
260 |b Cornell University Library, arXiv.org  |c Mar 10, 2024 
513 |a Working Paper 
520 3 |a Federated learning (FL) facilitates collaborative training of machine learning models among a large number of clients while safeguarding the privacy of their local datasets. However, FL remains susceptible to vulnerabilities such as privacy inference and inversion attacks. Single-server secure aggregation schemes were proposed to address these threats, but they face practical constraints due to their round and communication complexities. This work introduces Fluent, a round- and communication-efficient secure aggregation scheme for private FL. Fluent offers several improvements over state-of-the-art solutions such as Bell et al. (CCS 2020) and Ma et al. (SP 2023): (1) it eliminates frequent handshakes and secret sharing operations by efficiently reusing shares across multiple training iterations without leaking any private information; (2) it accomplishes both the consistency check and gradient unmasking in one logical step, thereby eliminating another round of communication. With these innovations, Fluent achieves the fewest communication rounds (i.e., two in the collection phase) in the malicious-server setting, in contrast to at least three rounds in existing schemes, significantly reducing latency for geographically distributed clients; (3) Fluent also introduces Fluent-Dynamic, with a participant selection algorithm and an alternative secret sharing scheme, which facilitates dynamic client joining and enhances system flexibility and scalability. We implemented Fluent and compared it with existing solutions. Experimental results show that Fluent reduces the computational cost by at least 75% and the communication overhead by at least 25% for normal clients. Fluent also reduces the communication overhead for the server at the expense of a marginal increase in computational cost. 
653 |a Geographical distribution 
653 |a Computing costs 
653 |a Algorithms 
653 |a Servers 
653 |a Clients 
653 |a Privacy 
653 |a Machine learning 
653 |a Federated learning 
653 |a Communication 
653 |a Computational efficiency 
700 1 |a Ning, Jianting 
700 1 |a Geong Sen Poh 
700 1 |a Leo Yu Zhang 
700 1 |a Yin, Xinchun 
700 1 |a Zhang, Tianwei 
773 0 |t arXiv.org  |g (Mar 10, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2955957937/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2403.06143