Enforcing Differential Privacy in Federated Learning via Long-Term Contribution Incentives

Xiangyun Tang, Luyao Peng*, Yu Weng*, Meng Shen, Liehuang Zhu, Robert H. Deng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Privacy-preserving Federated Learning (FL) based on Differential Privacy (DP) protects clients' data by adding DP noise to samples' gradients and has emerged as a de facto standard for data privacy in FL. However, the accuracy of global models in DP-based FL can drop significantly when rogue clients deviate from the preset DP-based FL protocol and selfishly inject excessive DP noise, i.e., apply a smaller privacy budget in the DP mechanism than agreed in order to secure a higher level of privacy for themselves. Existing DP-based FL fails to prevent such attacks because they are imperceptible: under a DP-based FL system with random Gaussian noise, the local model parameters of rogue clients and honest clients follow identical distributions. Rogue local models do exhibit low performance, but directly filtering out low-performance local models compromises the generalizability of the global model, since local models trained on scarce data also perform poorly in early epochs. In this paper, we propose ReFL, a novel privacy-preserving FL system that enforces DP and avoids the global-model accuracy loss caused by the excessive DP noise of rogue clients. Based on the observation that rogue local models with excessive DP noise and honest local models trained on scarce data exhibit different performance patterns over long-term training epochs, we propose a long-term contribution incentive scheme that evaluates clients' reputations and identifies rogue clients. Furthermore, we design a reputation-based aggregation that uses these incentive reputations to prevent rogue clients' models from damaging global model accuracy. Extensive experiments demonstrate that ReFL achieves global model accuracy 0.77%-81.71% higher than existing DP-based FL methods in the presence of rogue clients.
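
The following is a minimal illustrative sketch, not the authors' released implementation, of the two mechanisms the abstract describes: per-client Gaussian DP noise on clipped gradients, where a rogue client selfishly calls the mechanism with a smaller privacy budget epsilon and therefore injects larger noise, and a reputation-weighted aggregation that down-weights clients whose long-term contribution reputation stays low. All function names (gaussian_sigma, dp_noisy_gradient, update_reputation, aggregate) and the exponential-moving-average reputation rule are assumptions made for illustration.

    # Hypothetical sketch of the mechanisms described in the abstract;
    # names and the EMA reputation rule are illustrative assumptions.
    import numpy as np

    def gaussian_sigma(epsilon, delta, sensitivity=1.0):
        # Standard analytic bound for the Gaussian mechanism:
        # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
        return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

    def dp_noisy_gradient(grad, epsilon, delta, clip=1.0):
        # Clip the gradient to bound sensitivity, then add Gaussian
        # noise. A rogue client simply passes a much smaller epsilon
        # than agreed, inflating sigma and degrading its local model;
        # the noisy parameters remain Gaussian-distributed, so the
        # deviation is statistically imperceptible to the server.
        g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
        sigma = gaussian_sigma(epsilon, delta, sensitivity=clip)
        return g + np.random.normal(0.0, sigma, size=g.shape)

    def update_reputation(rep, contribution, beta=0.9):
        # Long-term incentive: an exponential moving average over
        # epochs. A client with scarce data scores poorly early but
        # improves; a rogue client with excessive noise stays poor,
        # so its reputation decays over many rounds.
        return beta * rep + (1.0 - beta) * contribution

    def aggregate(local_models, reputations):
        # Reputation-based aggregation: weight each local model by
        # its normalized long-term reputation instead of uniform
        # FedAvg weights.
        w = np.asarray(reputations, dtype=float)
        w = w / w.sum()
        return sum(wi * m for wi, m in zip(w, local_models))

Under this sketch, if the agreed budget were, say, epsilon = 8 and a rogue client instead used epsilon = 0.5, its noise scale sigma would grow 16-fold; a single round cannot distinguish this from an honest client with scarce data, but over many rounds the rogue client's reputation decays and its aggregation weight shrinks toward zero.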

Original language: English
Pages (from-to): 3102-3115
Number of pages: 14
Journal: IEEE Transactions on Information Forensics and Security
Volume: 20
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Differential privacy
  • Federated learning
  • Privacy protection
