Abstract
Federated learning (FL) is susceptible to Byzantine attacks due to its inherently distributed and privacy-preserving nature. Most defense methods based on model parameters become ineffective under intense non-independent and identically distributed (Non-IID) scenarios. In this paper, we shift our focus from model parameters to the specific behavior of the model on a dataset, and propose a robust algorithm named RFVIR to defend against Byzantine attacks in the FL setting. RFVIR first evaluates the feature representations of each local model on a virtual dataset, then computes the Gram feature matrix to capture the differences between Byzantine attackers and benign clients, removes suspicious local models based on the Median Absolute Deviation (MAD), and finally applies a clipping operation to further mitigate the effect of any remaining Byzantine attackers. Because RFVIR focuses on the specific behavior of the model rather than on its parameters, it remains effective under intense Non-IID scenarios. We conduct experiments on the CIFAR-10, MNIST, and GTSRB datasets, considering five typical Byzantine attacks and a variety of attack scenarios. The experimental results demonstrate that RFVIR successfully defends against various Byzantine attacks and outperforms existing robust algorithms.
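The abstract gives no implementation details, so the following is a minimal sketch of the pipeline it outlines: extracting feature representations on a virtual dataset, scoring clients via Gram matrices, filtering with MAD, and clipping the survivors. The `model.features` extractor, the distance-to-median score, the MAD threshold of 3, and the norm-clipping rule are all illustrative assumptions, not the paper's exact method.

```python
import numpy as np
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a batch of feature representations (n_samples x dim)."""
    return features @ features.T

def mad_filter(scores: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Keep indices whose score lies within `threshold` MADs of the median."""
    median = np.median(scores)
    mad = np.median(np.abs(scores - median))
    if mad == 0:  # all scores identical: nothing to reject
        return np.arange(len(scores))
    robust_z = np.abs(scores - median) / (1.4826 * mad)
    return np.where(robust_z <= threshold)[0]

def rfvir_like_aggregate(models, virtual_loader, clip_norm=1.0):
    # 1) Feature representations of each local model on the virtual dataset.
    grams = []
    for model in models:
        feats = []
        with torch.no_grad():
            for x, _ in virtual_loader:
                feats.append(model.features(x))  # hypothetical feature extractor
        grams.append(gram_matrix(torch.cat(feats)))
    # 2) Score each client by its Gram matrix's distance to the element-wise median.
    median_gram = torch.stack(grams).median(dim=0).values
    scores = np.array([(g - median_gram).norm().item() for g in grams])
    # 3) Drop suspicious clients via MAD; clip and average the survivors.
    updates = []
    for i in mad_filter(scores):
        flat = torch.cat([p.detach().flatten() for p in models[i].parameters()])
        flat = flat * min(1.0, clip_norm / (flat.norm().item() + 1e-12))  # norm clipping
        updates.append(flat)
    return torch.stack(updates).mean(dim=0)  # flattened aggregated parameters
```

The factor 1.4826 makes the MAD a consistent estimator of the standard deviation under Gaussian scores; it is a conventional choice, and the paper may use a different normalization or threshold.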
Original language | English |
---|---|
Article number | 102251 |
Journal | Information Fusion |
Volume | 105 |
DOIs | |
Publication status | Published - May 2024 |
Keywords
- Byzantine attack
- Federated learning
- Gram matrix
- MAD
- Robust
- Virtual dataset