Efficient and persistent backdoor attack by boundary trigger set constructing against federated learning

Deshan Yang, Senlin Luo*, Jinjie Zhou, Limin Pan, Xiaonan Yang, Jiyuan Xing

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Federated learning systems face a range of security risks, including backdoor, inference, and adversarial attacks. Backdoor attacks in this setting generally require careful trigger-sample design involving candidate selection and automated optimization. Previous methods selected trigger candidates at random from the training dataset, disrupting the sample distribution and blurring the boundaries between classes, which degraded main-task accuracy. Moreover, these methods relied on unoptimized handcrafted triggers, yielding a weak backdoor mapping and lower attack success rates. In this work, we propose a flexible backdoor attack approach, Trigger Sample Selection and Optimization (TSSO), motivated by neural network classification patterns. TSSO employs autoencoders and locality-sensitive hashing to select trigger candidates at class boundaries for precise injection. It then iteratively refines the trigger representation using the global model and historical outcomes, establishing a robust backdoor mapping. TSSO is evaluated on four classical datasets under non-IID settings and outperforms state-of-the-art methods, achieving a higher attack success rate in fewer rounds and prolonging the backdoor effect. In scalability tests, even with defenses deployed, TSSO achieved an attack success rate of over 80% with only 4% malicious clients (a poisoning rate of 1/640).
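The abstract only outlines the boundary-candidate selection step. Below is a minimal sketch of the idea, assuming embeddings have already been produced by a trained autoencoder's encoder and using a plain random-projection LSH; the function and parameter names (`lsh_buckets`, `select_boundary_candidates`, `n_planes`) are illustrative and do not come from the paper. A sample whose hash bucket contains more than one class is treated as lying near a class boundary.

```python
# Hedged sketch of boundary-candidate selection in the spirit of TSSO.
# Embeddings are assumed to come from an autoencoder's encoder; the LSH
# here is a generic random-projection scheme, not necessarily the
# variant used in the paper.
import numpy as np

def lsh_buckets(embeddings, n_planes=12, seed=0):
    """Hash each embedding to a bucket via random hyperplane projections."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((embeddings.shape[1], n_planes))
    bits = (embeddings @ planes) > 0                  # sign pattern per sample
    # Pack each bit pattern into a single integer bucket key.
    return bits.astype(np.int64) @ (1 << np.arange(n_planes))

def select_boundary_candidates(embeddings, labels, n_planes=12):
    """Return indices of samples whose LSH bucket mixes several classes.

    Samples that collide with differently labeled neighbours in embedding
    space sit near class boundaries, so poisoning them disturbs the clean
    sample distribution less than random selection would.
    """
    keys = lsh_buckets(embeddings, n_planes)
    candidates = []
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        if len(np.unique(labels[idx])) > 1:           # mixed-class bucket
            candidates.extend(idx.tolist())
    return np.array(candidates)

if __name__ == "__main__":
    # Toy two-class embeddings standing in for autoencoder outputs.
    rng = np.random.default_rng(1)
    emb = np.vstack([rng.normal(0.0, 1.0, (200, 32)),
                     rng.normal(0.5, 1.0, (200, 32))])
    lab = np.array([0] * 200 + [1] * 200)
    cand = select_boundary_candidates(emb, lab)
    print(f"{len(cand)} boundary candidates out of {len(lab)} samples")
```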
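For the refinement step, the abstract says trigger representations are iteratively updated via the global model and historical outcomes. The following is one plausible realization, not the paper's exact procedure: the trigger patch is treated as a learnable tensor and optimized against the current global model so that stamped inputs are pushed toward the attacker's target label. The optimizer, patch mask, and hyperparameters are assumptions, and the use of historical outcomes is omitted here.

```python
# Hedged sketch of iterative trigger refinement against the global model.
# All hyperparameters and the Adam-based update are illustrative choices.
import torch
import torch.nn.functional as F

def refine_trigger(global_model, loader, trigger, mask, target_label,
                   steps=50, lr=0.01):
    """Update `trigger` so that patched inputs are classified as target_label."""
    trigger = trigger.clone().requires_grad_(True)
    opt = torch.optim.Adam([trigger], lr=lr)
    global_model.eval()
    for _ in range(steps):
        for x, _ in loader:
            stamped = (1 - mask) * x + mask * trigger      # apply the patch
            logits = global_model(stamped)
            target = torch.full((x.size(0),), target_label, dtype=torch.long)
            loss = F.cross_entropy(logits, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                trigger.clamp_(0, 1)                       # keep pixels valid
    return trigger.detach()
```

Repeating this refinement each round against the freshly aggregated global model is what would strengthen the backdoor mapping over time, as the abstract describes.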

Original language: English
Article number: 119743
Journal: Information Sciences
Volume: 651
DOIs
Publication status: Published - Dec 2023

Keywords

  • Backdoor attack
  • Deep learning
  • Federated learning
  • Poisoning attack
  • Sample selection
  • Trigger optimization
