RoPE: Defending against backdoor attacks in federated learning systems

Yongkang Wang, Di Hua Zhai*, Yuanqing Xia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Federated learning (FL) is vulnerable to backdoor attacks, which aim to cause misclassification of samples containing a specific backdoor trigger. Most existing defense algorithms rely on restrictive assumptions, such as the data distribution across the participating clients, the number of attackers, or auxiliary information, which limits their applicability in practical FL. In this paper, we propose RoPE, which consists of three parts: using principal component analysis to extract the important features of model gradients; leveraging expectation–maximization to separate malicious clients from benign ones according to those features; and removing the potentially malicious gradients remaining within the selected cluster with Isolation Forest. RoPE requires no restrictive assumptions during the training process. We evaluate the performance of RoPE on three image classification tasks under the non-independent and identically distributed (non-iid) setting, against centralized backdoor attacks with various ratios of attackers and against distributed backdoor attacks. We also evaluate RoPE in other backdoor attack scenarios, including the independent and identically distributed (iid) setting and elaborately designed attack schemes. The results show that RoPE can defend against these backdoor attacks and outperforms the existing algorithms. In addition, we explore the impact of the number of extracted features on RoPE's performance and conduct ablation experiments.
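To make the three-stage pipeline concrete, the following is a minimal sketch, not the authors' implementation: it assumes client gradients arrive flattened as a (n_clients, n_params) NumPy array, and it stands in for the paper's EM step with scikit-learn's GaussianMixture (which is fit via expectation–maximization); the function name `filter_gradients`, the majority-cluster heuristic, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.mixture import GaussianMixture


def filter_gradients(client_grads: np.ndarray, n_features: int = 2) -> np.ndarray:
    """Return indices of clients whose gradients are kept for aggregation."""
    # 1) PCA: project the high-dimensional gradients onto a few
    #    principal components (the "important features").
    feats = PCA(n_components=n_features).fit_transform(client_grads)

    # 2) EM clustering: fit a two-component Gaussian mixture over the
    #    features and keep the larger cluster, assuming benign clients
    #    form the majority (an assumption of this sketch).
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(feats)
    benign_label = np.bincount(labels).argmax()
    kept = np.where(labels == benign_label)[0]

    # 3) Isolation Forest: drop residual outliers that remain inside
    #    the selected cluster (fit_predict returns 1 for inliers).
    inliers = IsolationForest(random_state=0).fit_predict(feats[kept]) == 1
    return kept[inliers]


# Toy usage: 10 clients with 1000-parameter gradients, 2 of them shifted
# to mimic backdoored updates.
rng = np.random.default_rng(0)
grads = rng.normal(size=(10, 1000))
grads[:2] += 5.0
print(filter_gradients(grads))
```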

Original language: English
Article number: 111660
Journal: Knowledge-Based Systems
Volume: 293
DOIs
Publication status: Published - 7 Jun 2024

Keywords

  • Backdoor attack
  • Expectation–maximization
  • Federated learning
  • Principal component analysis
  • Robustness
