RoPE: Defending against backdoor attacks in federated learning systems

Yongkang Wang, Di Hua Zhai*, Yuanqing Xia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Federated learning (FL) is vulnerable to backdoor attacks, which aim to cause the misclassification of samples containing a specific backdoor trigger. Most existing defense algorithms are restricted by conditions such as the data distribution across the participating clients, the number of attackers, or the availability of auxiliary information, and are therefore of limited use in practical FL. In this paper, we propose RoPE, which consists of three parts: using principal component analysis to extract the important features of model gradients; leveraging expectation–maximization to separate malicious clients from benign ones according to these important features; and removing the potential malicious gradients within the selected cluster with Isolation Forest. RoPE requires no restrictive assumptions during the training process. We evaluate RoPE on three image classification tasks under a non-independent and identically distributed (non-iid) scenario against centralized backdoor attacks with various ratios of attackers and against distributed backdoor attacks. We also evaluate RoPE under other backdoor attack scenarios, including the independent and identically distributed (iid) setting and elaborately designed attack schemes. The results show that RoPE defends against these backdoor attacks and outperforms existing algorithms. In addition, we explore the impact of different numbers of features on RoPE's performance and conduct ablation experiments.
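The abstract describes a three-stage filtering pipeline: PCA for extracting important features of client gradients, EM-based clustering to separate benign from malicious clients, and Isolation Forest to prune residual outliers in the selected cluster. The sketch below is an illustrative reconstruction of such a pipeline using scikit-learn, not the authors' implementation; the function name filter_client_gradients, the number of principal components, the two-component Gaussian mixture, and the synthetic gradients are assumptions made only for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import IsolationForest


def filter_client_gradients(grad_matrix, n_features=10, seed=0):
    """Hypothetical three-stage filter over per-client gradient vectors.

    grad_matrix: array of shape (n_clients, n_params), one flattened
    gradient (or model update) per client for the current round.
    Returns the indices of clients kept for aggregation.
    """
    # Stage 1: PCA compresses each client's gradient into a few
    # "important features" (n_features is an assumed hyperparameter).
    feats = PCA(n_components=n_features, random_state=seed).fit_transform(grad_matrix)

    # Stage 2: EM-based clustering (Gaussian mixture with two components)
    # splits clients into two groups; the larger group is treated as benign.
    labels = GaussianMixture(n_components=2, random_state=seed).fit_predict(feats)
    benign_label = np.argmax(np.bincount(labels))
    benign_idx = np.where(labels == benign_label)[0]

    # Stage 3: Isolation Forest inside the selected cluster removes any
    # remaining outlier gradients that slipped into the benign group
    # (fit_predict returns 1 for inliers, -1 for outliers).
    inlier_flags = IsolationForest(random_state=seed).fit_predict(feats[benign_idx])
    return benign_idx[inlier_flags == 1]


# Example with synthetic data: 20 benign clients plus 5 clients whose
# updates are shifted, loosely mimicking backdoored gradients.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(20, 1000))
malicious = rng.normal(3.0, 1.0, size=(5, 1000))
grads = np.vstack([benign, malicious])
print("kept clients:", filter_client_gradients(grads))
```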

Original language: English
Article number: 111660
Journal: Knowledge-Based Systems
Volume: 293
DOI
Publication status: Published - 7 Jun 2024
