TY - JOUR
T1 - Privacy-Preserving Rényi Layer-wise Budget Allocation against Gradient Leakage for Federated Learning
AU - Shi, Leyu
AU - Gao, Ying
AU - Chen, Chong
AU - Huang, Siquan
AU - Zhao, Jiafeng
AU - Hu, Xiping
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL) is vulnerable to gradient-based privacy attacks, where malicious attackers reconstruct training data from exchanged gradients. While existing differential privacy (DP) defenses mitigate this threat, they often introduce excessive additive noise due to inequality scaling in their theoretical analyses, which degrades model utility, or they fail under adaptive attacks. To address this issue, we propose FedMSBA, a layer-wise privacy-preservation method that adaptively allocates privacy budgets via Rényi DP (RDP) and modified sensitivity. FedMSBA dynamically scales noise to model intricacies and adaptively selects the more suitable DP mechanism, which provides a tighter mathematical bound and ultimately prevents non-convergence while resisting reconstruction attacks. Experiments demonstrate superior privacy-utility trade-offs compared to state-of-the-art defenses: FedMSBA achieves an approximately 2% improvement in accuracy and a 5% enhancement in privacy preservation. Furthermore, FedMSBA's performance remains nearly unaffected by variations in the privacy budget ϵ and failure rate δ.
AB - Federated learning (FL) is vulnerable to gradient-based privacy attacks, where malicious attackers reconstruct training data from exchanged gradients. While existing differential privacy (DP) defenses mitigate this threat, they often introduce excessive additive noise due to inequality scaling in their theoretical analyses, which degrades model utility, or they fail under adaptive attacks. To address this issue, we propose FedMSBA, a layer-wise privacy-preservation method that adaptively allocates privacy budgets via Rényi DP (RDP) and modified sensitivity. FedMSBA dynamically scales noise to model intricacies and adaptively selects the more suitable DP mechanism, which provides a tighter mathematical bound and ultimately prevents non-convergence while resisting reconstruction attacks. Experiments demonstrate superior privacy-utility trade-offs compared to state-of-the-art defenses: FedMSBA achieves an approximately 2% improvement in accuracy and a 5% enhancement in privacy preservation. Furthermore, FedMSBA's performance remains nearly unaffected by variations in the privacy budget ϵ and failure rate δ.
KW - Federated Learning
KW - Gradient Clipping
KW - Gradient Leakage Attack
KW - Rényi Differential Privacy
KW - Secure Aggregation
KW - Security and Privacy
UR - https://www.scopus.com/pages/publications/105018689248
U2 - 10.1109/TMC.2025.3618185
DO - 10.1109/TMC.2025.3618185
M3 - Article
AN - SCOPUS:105018689248
SN - 1536-1233
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
ER -