Accuracy-aware differential privacy in federated learning of large transformer models

Junyan Ouyang, Rui Han*, Xiaojiang Zuo, Yunlai Cheng, Chi Harold Liu

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning with differential privacy (DP-FL) allows distributed clients to collaboratively train a model by exchanging their model parameters with injected noise. Despite its great benefits for privacy protection, DP-FL still suffers from noise that grows linearly with model size. Hence, when applied to the large transformers used in modern AI systems, DP-FL may cause severe accuracy degradation. Prior art either injects isotropic noise into all model parameters or relies on empirical settings to vary the noise injected into different model parts. In this paper, we propose AccurateDP, which systematically leverages the distinct effect that noise in each model unit has on accuracy to improve DP-FL performance. The key of AccurateDP is to support noise injection at multiple granularities so as to minimize accuracy variations under DP. Given a granularity and a privacy budget, AccurateDP further provides an automatic means of finding the optimal noise injection setting, together with theoretical proofs for the approach. We implemented AccurateDP to support prevalent transformer models. Extensive evaluation against the latest techniques shows that AccurateDP increases accuracy by an average of 7.69% under the same privacy budget, with an even larger improvement (9.23%) when applied to large models.
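The mechanism the abstract describes, clipping client updates and injecting Gaussian noise whose scale varies per parameter group (e.g., per layer), can be sketched as follows. This is a minimal illustration assuming per-layer grouping and a standard Gaussian mechanism; the function names, the clipping rule, and the per-group sigma values are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical sketch of granularity-aware noise injection for DP-FL.
# The per-group noise multipliers below are assumed values chosen to
# illustrate accuracy-aware allocation, not the paper's learned settings.

def clip_update(update, clip_norm):
    """Clip a client update to a maximum L2 norm (DP-SGD-style clipping)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def noisy_aggregate(client_updates, clip_norm, sigma):
    """Average clipped client updates, then add Gaussian noise
    with standard deviation sigma * clip_norm / num_clients."""
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, sigma * clip_norm / len(clipped),
                             size=mean.shape)
    return mean + noise

rng = np.random.default_rng(0)
# Two parameter groups (per-layer granularity), 4 clients' updates each.
groups = {
    "attention": [rng.standard_normal(16) for _ in range(4)],
    "mlp":       [rng.standard_normal(16) for _ in range(4)],
}
# Groups whose accuracy is more sensitive to noise get a smaller sigma;
# less sensitive groups absorb more noise, keeping the total budget fixed.
sigmas = {"attention": 0.8, "mlp": 1.2}

aggregated = {
    name: noisy_aggregate(updates, clip_norm=1.0, sigma=sigmas[name])
    for name, updates in groups.items()
}
print({k: v[:3].round(3) for k, v in aggregated.items()})
```

Under this reading, choosing the granularity (per-tensor, per-layer, per-block) and the per-group multipliers is the optimization problem that AccurateDP automates for a given privacy budget.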

Original language: English
Article number: 103986
Journal: Journal of Information Security and Applications
Volume: 89
DOI: 10.1016/j.jisa.2025.103986
Publication status: Published - March 2025

Cite this

Ouyang, J., Han, R., Zuo, X., Cheng, Y., & Liu, C. H. (2025). Accuracy-aware differential privacy in federated learning of large transformer models. Journal of Information Security and Applications, 89, Article 103986. https://doi.org/10.1016/j.jisa.2025.103986