Fed-AugMix: Balancing Privacy and Utility via Data Augmentation

Bibliographic Details
Published in: arXiv.org (Dec 18, 2024), p. n/a
Main Author: Li, Haoyang
Other Authors: Chen, Wei; Zhang, Xiaojin
Published: Cornell University Library, arXiv.org
Subjects: Distortion; Data augmentation; Algorithms; Controllability; Performance degradation; Privacy; Machine learning; Federated learning; Leakage; Tradeoffs; Stability augmentation
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3147264598
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3147264598 
045 0 |b d20241218 
100 1 |a Li, Haoyang 
245 1 |a Fed-AugMix: Balancing Privacy and Utility via Data Augmentation 
260 |b Cornell University Library, arXiv.org  |c Dec 18, 2024 
513 |a Working Paper 
520 3 |a Gradient leakage attacks pose a significant threat to the privacy guarantees of federated learning. While distortion-based protection mechanisms are commonly employed to mitigate this issue, they often lead to notable performance degradation. Existing methods struggle to preserve model performance while ensuring privacy. To address this challenge, we propose a novel data augmentation-based framework designed to achieve a favorable privacy-utility trade-off, with the potential to enhance model performance in certain cases. Our framework incorporates the AugMix algorithm at the client level, enabling data augmentation with controllable severity. By integrating the Jensen-Shannon divergence into the loss function, we embed the distortion introduced by AugMix into the model gradients, effectively safeguarding privacy against deep leakage attacks. Moreover, the JS divergence promotes model consistency across different augmentations of the same image, enhancing both robustness and performance. Extensive experiments on benchmark datasets demonstrate the effectiveness and stability of our method in protecting privacy. Furthermore, our approach maintains, and in some cases improves, model performance, showcasing its ability to achieve a robust privacy-utility trade-off. 
653 |a Distortion 
653 |a Data augmentation 
653 |a Algorithms 
653 |a Controllability 
653 |a Performance degradation 
653 |a Privacy 
653 |a Machine learning 
653 |a Federated learning 
653 |a Leakage 
653 |a Tradeoffs 
653 |a Stability augmentation 
700 1 |a Chen, Wei 
700 1 |a Zhang, Xiaojin 
773 0 |t arXiv.org  |g (Dec 18, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3147264598/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.13818
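
Note: the abstract (field 520) describes a client-side objective that combines AugMix data augmentation with a Jensen-Shannon consistency term so that the augmentation-induced distortion is carried into the model gradients. The following is a minimal PyTorch sketch of such a loss, following the standard AugMix JSD formulation; the function names, the jsd_weight default, and the use of torchvision.transforms.AugMix with a severity argument are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def js_divergence(logits_clean, logits_aug1, logits_aug2):
        # Jensen-Shannon divergence among the three predictive distributions.
        p_clean = F.softmax(logits_clean, dim=1)
        p_aug1 = F.softmax(logits_aug1, dim=1)
        p_aug2 = F.softmax(logits_aug2, dim=1)
        # Log of the mixture distribution M, clamped for numerical stability.
        log_m = torch.clamp((p_clean + p_aug1 + p_aug2) / 3.0, 1e-7, 1.0).log()
        return (F.kl_div(log_m, p_clean, reduction="batchmean")
                + F.kl_div(log_m, p_aug1, reduction="batchmean")
                + F.kl_div(log_m, p_aug2, reduction="batchmean")) / 3.0

    def client_loss(model, x_clean, x_aug1, x_aug2, targets, jsd_weight=12.0):
        # x_aug1 / x_aug2 are two augmented views of x_clean, e.g. produced with
        # torchvision.transforms.AugMix(severity=s) for a client-chosen severity s
        # (hypothetical choice; the paper's exact augmentation pipeline may differ).
        logits = model(torch.cat([x_clean, x_aug1, x_aug2], dim=0))
        logits_clean, logits_aug1, logits_aug2 = torch.chunk(logits, 3, dim=0)
        ce = F.cross_entropy(logits_clean, targets)  # standard task loss on clean data
        jsd = js_divergence(logits_clean, logits_aug1, logits_aug2)
        return ce + jsd_weight * jsd  # consistency-regularized client objective

Because the JS term penalizes disagreement between the clean and augmented predictions, the distortion introduced by the augmentations enters the backward pass, which is the mechanism the abstract credits for resisting gradient leakage while preserving accuracy.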