RFVIR: A robust federated algorithm defending against Byzantine attacks

Yongkang Wang, Di Hua Zhai*, Yuanqing Xia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Federated learning (FL) is susceptible to Byzantine attacks due to its inherently distributed and privacy-preserving nature. Most defenses based on model parameters become utterly ineffective under intense non-independent and identically distributed (Non-IID) scenarios. In this paper, we shift our focus from model parameters to the specific behavior of the model on a dataset, and propose a robust algorithm named RFVIR to defend against Byzantine attacks in the FL setting. RFVIR first tests the feature representations of each local model on a virtual dataset, then computes the Gram feature matrix to capture the difference between Byzantine attackers and benign clients, removes suspicious local models based on the Median Absolute Deviation (MAD), and finally leverages a clipping operation to further mitigate the effect of any remaining Byzantine attackers. Because RFVIR focuses on the specific behavior of the model rather than on the model parameters, it remains applicable in intense Non-IID scenarios. We conduct experiments on the CIFAR-10, MNIST, and GTSRB datasets, considering five typical Byzantine attacks and various attack scenarios. The experimental results demonstrate that RFVIR successfully defends against various Byzantine attacks and outperforms existing robust algorithms.
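Based only on the abstract's high-level description, the MAD-based filtering over Gram matrices and the subsequent clipping step might be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the Frobenius-norm distance to a median Gram matrix, and the MAD threshold are all assumptions.

```python
import numpy as np

def mad_filter(features, mad_threshold=2.5):
    """Flag suspicious clients via Median Absolute Deviation (MAD)
    on Gram-matrix statistics (illustrative sketch, not RFVIR itself).

    features: list of (n_samples, d) arrays -- each client's feature
    representations on a shared virtual dataset.
    Returns the indices of clients kept after the MAD test.
    """
    # Gram matrix of each client's features on the virtual dataset
    grams = [f @ f.T for f in features]
    # Element-wise median Gram matrix serves as a robust reference
    ref = np.median(np.stack(grams), axis=0)
    # Distance of each client's Gram matrix to the reference
    scores = np.array([np.linalg.norm(g - ref) for g in grams])

    # MAD-based outlier test on the distance scores
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    return np.where(np.abs(scores - med) / mad <= mad_threshold)[0]

def clip_update(update, clip_norm=1.0):
    """Scale a surviving model update so its L2 norm is at most clip_norm,
    limiting the influence of any attacker that passed the MAD test."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))
```

In this sketch, an attacker whose features on the virtual dataset diverge from the benign majority produces a Gram matrix far from the element-wise median, yielding a large MAD-normalized score and exclusion; clipping then bounds whatever residual influence a surviving malicious update could exert on the aggregate.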

Original language: English
Article number: 102251
Journal: Information Fusion
Volume: 105
DOIs
Publication status: Published - May 2024

Keywords

  • Byzantine attack
  • Federated learning
  • Gram matrix
  • MAD
  • Robust
  • Virtual dataset
