Privacy-Preserving Rényi Layer-wise Budget Allocation against Gradient Leakage for Federated Learning

  • Leyu Shi
  • Ying Gao*
  • Chong Chen
  • Siquan Huang
  • Jiafeng Zhao
  • Xiping Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning (FL) is vulnerable to gradient-based privacy attacks, in which malicious attackers reconstruct training data from exchanged gradients. While existing differential privacy (DP) defenses mitigate this threat, they often introduce excessive additive noise due to inequality scaling in their theoretical analyses, which degrades model utility or fails under adaptive attacks. To address this issue, we propose FedMSBA, a layer-wise privacy-preservation method that adaptively allocates privacy budgets via Rényi DP (RDP) and a modified sensitivity. FedMSBA dynamically scales noise to the model's structure and adaptively selects the better-suited DP mechanism, which provides a tighter mathematical bound and ultimately prevents non-convergence while resisting reconstruction attacks. Experiments demonstrate superior privacy-utility trade-offs compared to state-of-the-art defenses: FedMSBA achieves approximately a 2% improvement in accuracy and a 5% enhancement in privacy preservation. Furthermore, FedMSBA's performance remains nearly unaffected by variations in the privacy budget ϵ and failure rate δ.
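The abstract describes per-layer privacy budget allocation under Rényi DP with gradient clipping and calibrated Gaussian noise. The paper's actual FedMSBA allocation rule is not reproduced here; the following is only an illustrative sketch of the general idea, assuming per-layer L2 clipping and a budget split proportional to each layer's clipped gradient norm (the function names and the proportional split are assumptions, not the paper's method). Noise is calibrated from the standard Gaussian-mechanism RDP bound ε = αΔ²/(2σ²).

```python
import numpy as np

def clip_layer(grad, clip_norm):
    """Scale a layer's gradient so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / (norm + 1e-12))

def layerwise_rdp_gaussian(grads, eps_total, alpha=8.0, clip_norm=1.0, seed=0):
    """Split an order-alpha RDP budget eps_total across layers in proportion
    to each clipped layer's gradient norm, then add Gaussian noise calibrated
    so that layer i consumes eps_i = share_i * eps_total.
    Uses the Gaussian-mechanism RDP bound eps = alpha * Delta^2 / (2 * sigma^2),
    i.e. sigma_i = Delta * sqrt(alpha / (2 * eps_i)).
    Illustrative only -- not the paper's FedMSBA allocation rule."""
    rng = np.random.default_rng(seed)
    clipped = [clip_layer(g, clip_norm) for g in grads]
    norms = np.array([np.linalg.norm(g) for g in clipped])
    shares = norms / norms.sum()          # proportional budget split (assumption)
    noisy = []
    for g, share in zip(clipped, shares):
        eps_i = share * eps_total
        sigma_i = clip_norm * np.sqrt(alpha / (2.0 * eps_i))  # smaller share -> more noise
        noisy.append(g + rng.normal(0.0, sigma_i, size=g.shape))
    return noisy
```

Layers that receive a smaller budget share get proportionally more noise, which is the intuition behind allocating more of the budget to layers whose gradients leak more information.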

Original language: English
Journal: IEEE Transactions on Mobile Computing
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • Federated Learning
  • Gradient Clipping
  • Gradient Leakage Attack
  • Rényi Differential Privacy
  • Secure Aggregation
  • Security and Privacy
