Clean-label poisoning attacks on federated learning for IoT

Jie Yang, Jun Zheng, Thar Baker*, Shuai Tang, Yu-an Tan, Quanxin Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Federated Learning (FL) is well suited to distributed edge-collaboration scenarios in the Internet of Things (IoT). Because it preserves data security and privacy, it is widely used in IoT applications such as the Industrial IoT (IIoT). Recent research shows that the federated learning framework is vulnerable to poisoning attacks when an adversary mounts an active attack. However, existing backdoor attack methods are easily detected by defence methods. To address this challenge, we focus on clean-label attacks against edge-cloud synergistic FL. Unlike common backdoor attacks, to keep the attack concealed we add a small perturbation that realizes the clean-label attack, guided by the cosine similarity between the gradient of the adversarial loss and the gradient of the normal training loss. To improve the attack success rate and robustness, the attack is launched when the global model is about to converge. Experimental results verify that poisoning only 1% of the data makes the attack succeed with high probability. Our method remains stealthy while poisoning the model: the average Peak Signal-to-Noise Ratio (PSNR) of the poisoned images exceeds 30 dB, and the average Structural SIMilarity (SSIM) is close to 0.93. Most importantly, our attack method can bypass the Byzantine aggregation defence.
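The abstract describes aligning the gradient that a poisoned-but-correctly-labelled sample induces under the normal training loss with the gradient of an adversarial objective, using cosine similarity as the alignment measure. The following is a minimal PyTorch sketch of that gradient-alignment idea, not the authors' implementation: the function name, the target pair (x_target, y_target), and the hyper-parameters (epsilon, steps, lr) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def craft_clean_label_perturbation(model, x, y, x_target, y_target,
                                   epsilon=8 / 255, steps=40, lr=0.01):
    """Hypothetical sketch: perturb a correctly-labelled sample (x, y)
    so that its normal-training-loss gradient aligns (high cosine
    similarity) with the gradient of an adversarial loss on the
    attacker's target, while the perturbation stays small."""
    # Gradient of the adversarial loss on the attacker's target pair.
    adv_loss = F.cross_entropy(model(x_target), y_target)
    adv_grads = torch.autograd.grad(adv_loss, model.parameters())
    adv_flat = torch.cat([g.flatten() for g in adv_grads]).detach()

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Gradient of the normal training loss on the poisoned sample;
        # the label y is kept unchanged, hence "clean label".
        poison_loss = F.cross_entropy(model(x + delta), y)
        poison_grads = torch.autograd.grad(
            poison_loss, model.parameters(), create_graph=True)
        poison_flat = torch.cat([g.flatten() for g in poison_grads])

        # Maximise cosine similarity between the two gradients.
        loss = 1 - F.cosine_similarity(poison_flat, adv_flat, dim=0)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Bound the perturbation so the poisoned image stays close to
        # the original (keeping PSNR/SSIM high).
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (x + delta).detach()
```

In an FL setting, a sketch like this would run on a compromised client against (a copy of) the near-converged global model, which is consistent with the abstract's choice to attack just before convergence, when honest updates are small and an aligned malicious update is harder to distinguish.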

Original language: English
Article number: e13161
Journal: Expert Systems
Volume: 40
Issue: 5
DOI: 10.1111/exsy.13161
Publication status: Published - June 2023


Cite this

Yang, J., Zheng, J., Baker, T., Tang, S., Tan, Y. A., & Zhang, Q. (2023). Clean-label poisoning attacks on federated learning for IoT. Expert Systems, 40(5), Article e13161. https://doi.org/10.1111/exsy.13161