FinBack: Infiltrating Backdoors into Gradient Compressors on Federated Learning

  • Xiangyun Tang
  • Wen Yang
  • Luyao Peng
  • Meng Shen
  • Tao Zhang*
  • Yu Weng
  • Jiawen Kang
  • Dusit Niyato

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Federated Learning (FL) has emerged as a promising distributed machine learning paradigm that allows clients to jointly train a global model without sharing their raw training datasets. However, FL is vulnerable to backdoor attacks, in which malicious clients inject specific backdoors into their local models to manipulate the global model's outputs. Recent studies have widely applied gradient compression to build efficient FL systems that are robust against backdoor attacks, but we argue that gradient compression cannot be regarded as a reliable defense strategy. In this work, we systematically evaluate the effectiveness of gradient compression against backdoor attacks. The experimental results indicate that, apart from SignSGD, which prevents backdoor injection without significantly reducing the accuracy of the global model, most gradient compression methods do not provide an effective defense against backdoor attacks. Furthermore, we develop a novel adaptive backdoor attack, named FinBack, that effectively infiltrates the gradient compressor SignSGD and implants backdoors in FL. FinBack induces small weight changes on specific neurons that do not conflict with benign clients' updates, thereby avoiding counteraction by benign clients and by perturbation triggers, and ensuring the effectiveness and persistence of the backdoors. FinBack encompasses two attack modes: FinBack, with server collusion, and FinBackR, without server collusion. Extensive experiments demonstrate the effectiveness and persistence of the proposed attacks, which increase the Attack Success Rate (ASR) from 10% to over 90% under SignSGD, even with only 1% of clients being malicious.
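To make the role of SignSGD concrete, the following is a minimal sketch of sign-based gradient compression with element-wise majority-vote aggregation, the mechanism that makes it hard for a small fraction of malicious clients to steer the global update. All function names and the toy gradients are illustrative assumptions, not the paper's implementation.

```python
def compress(gradient):
    # SignSGD client step: 1-bit compression, keep only the sign of each coordinate
    return [1 if g > 0 else -1 if g < 0 else 0 for g in gradient]

def majority_vote(sign_updates):
    # Server step: element-wise majority vote over all clients' sign vectors
    totals = [sum(col) for col in zip(*sign_updates)]
    return [1 if t > 0 else -1 if t < 0 else 0 for t in totals]

def apply_update(weights, vote, lr=0.01):
    # Descend along the voted sign direction
    return [w - lr * v for w, v in zip(weights, vote)]

# Toy round: two benign clients and one malicious client that pushes
# the exact opposite direction on every coordinate
benign1 = compress([0.5, -0.2, 0.1])
benign2 = compress([0.3, -0.4, 0.2])
malicious = compress([-0.4, 0.3, -0.15])

vote = majority_vote([benign1, benign2, malicious])   # → [1, -1, 1]
new_w = apply_update([0.0, 0.0, 0.0], vote)
```

Because the malicious client is outvoted coordinate-by-coordinate, its contribution vanishes from the aggregate; this is why naive backdoor gradients fail under SignSGD, and why FinBack instead targets neurons where its sign flips do not conflict with the benign majority.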

Original language: English
Pages (from-to): 12460-12475
Number of pages: 16
Journal: IEEE Transactions on Information Forensics and Security
Volume: 20
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Federated learning
  • backdoor attacks
  • privacy protection
