RFVIR: A robust federated algorithm defending against Byzantine attacks

Yongkang Wang, Di Hua Zhai*, Yuanqing Xia

*Corresponding author for this work

Research output: Contribution to journal › Article › Peer review

1 Citation (Scopus)

Abstract

Federated learning (FL) is susceptible to Byzantine attacks due to its inherently distributed and privacy-preserving nature. Most defense methods based on model parameters become utterly ineffective under intense non-independent and identically distributed (Non-IID) scenarios. In this paper, we shift our focus from model parameters to the specific behavior of the model on a dataset, and propose a robust algorithm named RFVIR to defend against Byzantine attacks under the FL setting. RFVIR first evaluates the feature representations of each local model on a virtual dataset, then computes the Gram feature matrix to capture the difference between Byzantine attackers and benign clients, removes suspicious local models based on the Median Absolute Deviation (MAD), and finally applies a clipping operation to further mitigate the effect of potential Byzantine attackers. Since RFVIR focuses on the specific behavior of the model rather than the model parameters, it applies to intense Non-IID scenarios. We conduct experiments on the CIFAR-10, MNIST, and GTSRB datasets, considering five typical Byzantine attacks and various attack scenarios. The experimental results demonstrate that RFVIR can successfully defend against various Byzantine attacks and outperforms existing robust algorithms.
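To make the described pipeline concrete, the following is a minimal Python sketch of the filter-and-clip steps named in the abstract (Gram feature matrix, MAD-based removal, clipping). It is an illustration under assumptions, not the authors' implementation: the choice of discrepancy score (Frobenius distance to the element-wise median Gram matrix), the 3-MAD threshold, and the quantile-based clipping weight are all placeholders.

```python
import numpy as np

def mad_filter_and_clip(features, clip_quantile=0.5):
    """Sketch of an RFVIR-style filter: 'features' is a list of
    (n_samples, d) arrays, one per client, holding the feature
    representations each local model produces on a shared virtual
    dataset. Returns kept client indices and per-client clip weights.
    Details are assumptions, not the paper's exact procedure."""
    # 1) Gram feature matrix per client: pairwise feature correlations.
    grams = [f @ f.T for f in features]

    # 2) Score each client by the distance of its Gram matrix to the
    #    element-wise median Gram matrix (one possible discrepancy).
    median_gram = np.median(np.stack(grams), axis=0)
    scores = np.array([np.linalg.norm(g - median_gram) for g in grams])

    # 3) MAD test: drop clients whose score deviates from the median
    #    by more than ~3 scaled MADs.
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    keep = np.where(np.abs(scores - med) / (1.4826 * mad) <= 3.0)[0]

    # 4) Clipping: bound the influence of the remaining updates via a
    #    common threshold derived from the kept clients' scores.
    tau = np.quantile(scores[keep], clip_quantile)
    clip_weights = np.minimum(1.0, tau / (scores[keep] + 1e-12))
    return keep, clip_weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = [rng.normal(size=(32, 16)) for _ in range(8)]
    byzantine = [rng.normal(loc=5.0, size=(32, 16)) for _ in range(2)]
    kept, w = mad_filter_and_clip(benign + byzantine)
    print("kept clients:", kept)  # the two shifted clients should be removed
```

Because the statistic is computed from model behavior on a common virtual dataset rather than from raw parameters, such a filter is not tied to the parameter geometry that Non-IID data distorts, which is the intuition the abstract gives for RFVIR's robustness in intense Non-IID scenarios.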

Original language: English
Article number: 102251
Journal: Information Fusion
Volume: 105
DOI
Publication status: Published - May 2024
