TY - JOUR
T1 - RoPE
T2 - Defending against backdoor attacks in federated learning systems
AU - Wang, Yongkang
AU - Zhai, Di-Hua
AU - Xia, Yuanqing
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/6/7
Y1 - 2024/6/7
N2 - Federated learning (FL) is vulnerable to backdoor attacks, which aim to cause misclassification of samples carrying a specific backdoor trigger. Most existing defense algorithms rely on restrictive conditions, such as the data distribution across the participating clients, the number of attackers, or auxiliary information, and are therefore limited in practical FL. In this paper, we propose RoPE, which consists of three parts: using principal component analysis to extract the important features of model gradients; leveraging expectation–maximization to separate malicious clients from benign ones in accordance with these important features; and removing the potential malicious gradients within the selected cluster with Isolation Forest. RoPE requires no restrictive assumptions during the training process. We evaluate the performance of RoPE on three image classification tasks under the non-independent and identically distributed (non-iid) scenario against centralized backdoor attacks with various ratios of attackers and against distributed backdoor attacks. We also evaluate RoPE against other backdoor attack scenarios, including the independent and identically distributed (iid) scheme and elaborately designed attack schemes. The results show that RoPE can defend against these backdoor attacks and outperforms existing algorithms. In addition, we explore the impact of different numbers of features on RoPE's performance and conduct ablation experiments.
AB - Federated learning (FL) is vulnerable to backdoor attacks, which aim to cause misclassification of samples carrying a specific backdoor trigger. Most existing defense algorithms rely on restrictive conditions, such as the data distribution across the participating clients, the number of attackers, or auxiliary information, and are therefore limited in practical FL. In this paper, we propose RoPE, which consists of three parts: using principal component analysis to extract the important features of model gradients; leveraging expectation–maximization to separate malicious clients from benign ones in accordance with these important features; and removing the potential malicious gradients within the selected cluster with Isolation Forest. RoPE requires no restrictive assumptions during the training process. We evaluate the performance of RoPE on three image classification tasks under the non-independent and identically distributed (non-iid) scenario against centralized backdoor attacks with various ratios of attackers and against distributed backdoor attacks. We also evaluate RoPE against other backdoor attack scenarios, including the independent and identically distributed (iid) scheme and elaborately designed attack schemes. The results show that RoPE can defend against these backdoor attacks and outperforms existing algorithms. In addition, we explore the impact of different numbers of features on RoPE's performance and conduct ablation experiments.
KW - Backdoor attack
KW - Expectation–maximization
KW - Federated learning
KW - Principal component analysis
KW - Robustness
UR - http://www.scopus.com/inward/record.url?scp=85188936254&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2024.111660
DO - 10.1016/j.knosys.2024.111660
M3 - Article
AN - SCOPUS:85188936254
SN - 0950-7051
VL - 293
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 111660
ER -