TY - JOUR
T1 - SFLES: Shuffled differentially private federated learning with early-stopping strategy
T2 - Expert Systems with Applications
AU - Li, Yanhui
AU - Huang, Chen
AU - Zhao, Yuxin
AU - Du, Xinjie
AU - Huang, Junqing
AU - Yuan, Ye
N1 - Publisher Copyright:
© 2025 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
PY - 2025
Y1 - 2025
AB - Federated Learning (FL) allows multiple clients to collaboratively train a global model without sharing raw data, yet it remains susceptible to privacy attacks. The recently proposed shuffle model of differential privacy (DP) offers a promising solution, leveraging privacy amplification to achieve strong local privacy guarantees while maintaining high utility. However, existing approaches based on this model rely on conventional Gaussian or Laplace mechanisms, which introduce unbounded noise and risk significant data distortion. Furthermore, these methods typically allocate the privacy budget inefficiently and incur excessive communication overhead and computational cost from fixed training rounds, ultimately degrading performance. To address these limitations, we present SFLES, a novel shuffled differentially private FL framework designed to robustly prevent privacy leakage while optimizing model utility. In particular, SFLES employs Top-k sparsification to compress local model updates and integrates an adaptive, layer-wise bounded noise mechanism based on a symmetric piecewise distribution for fine-grained noise injection. To enhance efficiency, we propose a directional similarity-aware aggregation strategy that prioritizes updates with consistent directional trends, accelerating convergence under DP constraints. Additionally, SFLES incorporates a dynamic early-stopping strategy that tracks update conflict rates and global accuracy trends, terminating training once convergence is detected and reallocating the residual privacy budget to subsequent rounds for improved utility. Extensive evaluations on MNIST, Fashion-MNIST, and CIFAR-10 demonstrate that SFLES surpasses state-of-the-art alternatives in the privacy-utility trade-off, convergence speed, and communication efficiency.
KW - Early stopping
KW - Federated learning
KW - Privacy amplification
KW - Shuffled differential privacy
KW - Top-k sparsification
UR - https://www.scopus.com/pages/publications/105020595478
DO - 10.1016/j.eswa.2025.129970
M3 - Article
AN - SCOPUS:105020595478
SN - 0957-4174
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 129970
ER -