Abstract
Federated learning with differential privacy (DP-FL) allows distributed clients to collaboratively train a model by exchanging their model parameters with injected noise. Despite its great benefits for privacy protection, DP-FL still suffers from large noise that increases linearly with model size. Hence, when applied to the large transformers used in modern AI systems, DP-FL may cause severe accuracy degradation. Prior art either injects isotropic noise into all model parameters or relies on empirical settings to vary the noise injected into different model parts. In this paper, we propose AccurateDP, which systematically leverages the distinct effects that noise in each model unit has on accuracy to improve DP-FL performance. The key idea of AccurateDP is to support noise injection at multiple granularities so as to minimize the accuracy degradation of DP. Given a granularity and a privacy budget, AccurateDP further provides an automatic means to find the optimal noise injection setting, and we give theoretical proofs for our approach. We implemented AccurateDP to support prevalent transformer models. Extensive evaluation against the latest techniques shows that AccurateDP increases accuracy by an average of 7.69% under the same privacy budget, with a larger improvement (9.23%) when applied to large models.
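To make the mechanism concrete, below is a minimal sketch (assuming PyTorch) of non-isotropic, component-wise Gaussian noise injection into a clipped client update. The function name `dp_noisy_update`, the prefix-based parameter grouping, and the multiplier values are illustrative assumptions for this sketch, not the paper's actual algorithm.

```python
import torch

def dp_noisy_update(update, clip_norm, group_multipliers, default_multiplier=1.0):
    """Clip a client update (global L2 norm) and add per-group Gaussian noise."""
    # Global L2 clipping bounds the sensitivity of the whole update.
    total_norm = torch.sqrt(sum(p.pow(2).sum() for p in update.values())).item()
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    noisy = {}
    for name, param in update.items():
        clipped = param * scale
        # Pick this parameter's noise multiplier by name prefix (the "granularity").
        mult = next((m for prefix, m in group_multipliers.items()
                     if name.startswith(prefix)), default_multiplier)
        # Noise std is calibrated to the clipping norm (the L2 sensitivity).
        noisy[name] = clipped + torch.randn_like(param) * (mult * clip_norm)
    return noisy

# Example: weaker noise on a component assumed to be accuracy-critical.
update = {"embed.weight": torch.randn(4, 8), "layer1.weight": torch.randn(8, 8)}
noisy = dp_noisy_update(update, clip_norm=1.0,
                        group_multipliers={"embed.": 0.5, "layer1.": 1.2})
```

Setting all multipliers equal recovers the isotropic baseline; varying them per component is the kind of granularity-aware injection the abstract describes, with the optimal setting found automatically under a fixed privacy budget.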
| Original language | English |
|---|---|
| Article number | 103986 |
| Journal | Journal of Information Security and Applications |
| Volume | 89 |
| Publication status | Published - Mar 2025 |
Keywords
- Accuracy-aware noise injection
- Differential privacy
- Federated learning
- Model component
- Transformer