TY - JOUR
T1 - FinBack: Infiltrating Backdoors into Gradient Compressors on Federated Learning
AU - Tang, Xiangyun
AU - Yang, Wen
AU - Peng, Luyao
AU - Shen, Meng
AU - Zhang, Tao
AU - Weng, Yu
AU - Kang, Jiawen
AU - Niyato, Dusit
N1 - Publisher Copyright:
© 2025 IEEE. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Federated Learning (FL) has emerged as a promising distributed machine learning paradigm that allows clients to jointly train a global model without sharing their raw training datasets. However, FL is vulnerable to backdoor attacks, where malicious clients inject specific backdoors into their local models to manipulate the global model's outputs. Recent studies have widely applied gradient compression to construct efficient and robust FL systems against backdoor attacks, but we argue that gradient compression cannot be regarded as a reliable defense strategy. In this work, we systematically evaluate the effectiveness of gradient compression against backdoor attacks. The experimental results indicate that, with the exception of SignSGD, which prevents backdoor injection without significantly reducing the accuracy of the global model, most gradient compression methods do not provide effective defenses against backdoor attacks. Furthermore, we develop a novel adaptive backdoor attack, named FinBack, that can effectively infiltrate the gradient compressor SignSGD and implant backdoors in FL by inducing small weight changes on specific neurons that do not conflict with benign clients, thereby avoiding counteraction by benign updates and perturbation triggers and ensuring the effectiveness and persistence of the backdoors. FinBack encompasses two attack modes: FinBack, with server collusion, and FinBackR, without server collusion. Extensive experiments demonstrate the effectiveness and persistence of the proposed attacks, which increase the Attack Success Rate (ASR) from 10% to over 90% under SignSGD, even with only 1% malicious clients.
AB - Federated Learning (FL) has emerged as a promising distributed machine learning paradigm that allows clients to jointly train a global model without sharing their raw training datasets. However, FL is vulnerable to backdoor attacks, where malicious clients inject specific backdoors into their local models to manipulate the global model's outputs. Recent studies have widely applied gradient compression to construct efficient and robust FL systems against backdoor attacks, but we argue that gradient compression cannot be regarded as a reliable defense strategy. In this work, we systematically evaluate the effectiveness of gradient compression against backdoor attacks. The experimental results indicate that, with the exception of SignSGD, which prevents backdoor injection without significantly reducing the accuracy of the global model, most gradient compression methods do not provide effective defenses against backdoor attacks. Furthermore, we develop a novel adaptive backdoor attack, named FinBack, that can effectively infiltrate the gradient compressor SignSGD and implant backdoors in FL by inducing small weight changes on specific neurons that do not conflict with benign clients, thereby avoiding counteraction by benign updates and perturbation triggers and ensuring the effectiveness and persistence of the backdoors. FinBack encompasses two attack modes: FinBack, with server collusion, and FinBackR, without server collusion. Extensive experiments demonstrate the effectiveness and persistence of the proposed attacks, which increase the Attack Success Rate (ASR) from 10% to over 90% under SignSGD, even with only 1% malicious clients.
KW - Federated learning
KW - backdoor attacks
KW - privacy protection
UR - https://www.scopus.com/pages/publications/105022127310
U2 - 10.1109/TIFS.2025.3633157
DO - 10.1109/TIFS.2025.3633157
M3 - Article
AN - SCOPUS:105022127310
SN - 1556-6013
VL - 20
SP - 12460
EP - 12475
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -