Abstract
Privacy-preserving federated learning allows many clients to collaboratively train a machine learning model by sharing encrypted models. Among these approaches, mask-based schemes have been widely adopted for their efficiency. Unfortunately, because individual local models are hidden by masks, such schemes are vulnerable to poisoning attacks by Byzantine clients, and existing work lacks a practical method to detect Byzantine clients within mask-based schemes. We propose FBFL, a flexible privacy-preserving Byzantine-robust federated learning scheme. While protecting client privacy, FBFL defends against Byzantine clients without relying on access to individual masked local models. Specifically, we design a secure distance-computation method based on the Pedersen commitment, and we use an exponential Manhattan distance to compute each client's malicious score, distinguishing benign models from abnormal ones. Since the malicious score is derived only from commitment values rather than from the masked local models, our Byzantine-robust method can be flexibly combined with other mask-based privacy-preserving methods. Security analysis shows that FBFL is Byzantine-robust and ensures the data security of clients, and experimental evaluation demonstrates its robustness and efficiency.
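To illustrate the scoring idea described above, the following is a minimal sketch of ranking clients by an exponential Manhattan (L1) distance. The abstract does not give the exact formula or the commitment-based protocol, so the function names, the scaling parameter `sigma`, and the plaintext inputs here are all illustrative assumptions; in FBFL itself the distances would be computed securely over Pedersen commitments, not over raw model vectors.

```python
import numpy as np

def exponential_manhattan_distance(u, v, sigma=1.0):
    # Hypothetical form: exponentiate the L1 (Manhattan) distance.
    # The paper's actual definition may differ.
    return np.exp(np.sum(np.abs(u - v)) / sigma)

def malicious_scores(models):
    # Score each client's model by its average exponential Manhattan
    # distance to the other clients' models; a larger score marks the
    # model as more anomalous relative to the population.
    n = len(models)
    scores = []
    for i in range(n):
        s = sum(exponential_manhattan_distance(models[i], models[j])
                for j in range(n) if j != i) / (n - 1)
        scores.append(s)
    return scores
```

With three tightly clustered benign models and one outlier, the outlier receives by far the highest score, so a simple threshold or ranking can separate abnormal updates from benign ones.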
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Cloud Computing |
| DOIs | |
| Publication status | Accepted/In press - 2025 |
| Externally published | Yes |
Keywords
- Byzantine-robust
- Federated learning
- Flexible
- Poisoning defense
- Privacy-preserving