A federated learning attack method based on edge collaboration via cloud

Jie Yang, Thar Baker*, Sukhpal Singh Gill, Xiaochuan Yang, Weifeng Han, Yuanzhang Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Federated learning (FL) is widely used in edge-cloud collaborative training because of its distributed architecture and its privacy-preserving property of not sharing local data. FLTrust, a state-of-the-art FL defense method, is a federated learning defense system with trust guidance. However, we find that FLTrust is not as robust as claimed. Therefore, in the edge collaboration scenario, we focus on poisoning attacks against the FLTrust defense system. Under FLTrust's trust-guided aggregation rule, model updates from participants that deviate significantly from the root gradient direction are eliminated, which weakens the poisoning effect on the global model. To address this, we construct malicious model updates that deviate from the trust gradient as much as possible while still evading removal by the FLTrust aggregation rule, thereby achieving model poisoning attacks. First, we use the rotation of high-dimensional vectors around an axis to construct malicious vectors with fixed orientations. Second, we construct the malicious vector by gradient inversion to achieve an efficient and fast attack. Finally, we use an optimization of random noise to construct a malicious vector with a fixed direction. Experimental results show that our attack method reduces model accuracy by 20%, severely undermining the model's usability. Our attacks also succeed hundreds of times faster than the FLTrust adaptive attack method.
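For context, the trust-guided aggregation that the attack targets can be summarized roughly as follows: FLTrust scores each client update by the ReLU of its cosine similarity to a server update computed on a small root dataset, rescales each update to the root update's norm, and averages the rescaled updates with the trust scores as weights. The sketch below is a minimal NumPy illustration of that rule and of one generic way (a fixed-angle construction via Gram-Schmidt) to build an update that deviates strongly from the trust direction while keeping a positive trust score. It is not the paper's exact construction; all function and variable names (fixed_angle_update, fltrust_like_aggregate, root_update, target_angle) are illustrative assumptions.

```python
# Illustrative sketch only: a simplified trust-guided aggregation in the spirit
# of FLTrust, and a generic fixed-angle malicious update that keeps a small
# positive cosine similarity so it is not discarded by the ReLU trust score.
import numpy as np


def fixed_angle_update(root_update, target_angle, rng):
    """Return a unit vector whose angle to root_update equals target_angle."""
    u = root_update / np.linalg.norm(root_update)      # trust direction
    r = rng.standard_normal(u.shape)                    # random direction
    w = r - np.dot(r, u) * u                            # Gram-Schmidt: part orthogonal to u
    w /= np.linalg.norm(w)
    # Combine within the plane spanned by (u, w): cosine to u is exactly cos(target_angle).
    return np.cos(target_angle) * u + np.sin(target_angle) * w


def fltrust_like_aggregate(root_update, client_updates):
    """Simplified trust-guided aggregation: ReLU(cosine) trust scores,
    norm rescaling to the root update, then a weighted average."""
    n0 = np.linalg.norm(root_update)
    scores, scaled = [], []
    for g in client_updates:
        cos = np.dot(g, root_update) / (np.linalg.norm(g) * n0 + 1e-12)
        scores.append(max(cos, 0.0))                            # ReLU trust score
        scaled.append(g * (n0 / (np.linalg.norm(g) + 1e-12)))   # rescale to root norm
    total = sum(scores) + 1e-12
    return sum(s * g for s, g in zip(scores, scaled)) / total


# Usage: an update at ~85 degrees keeps a small positive trust score
# (so it survives aggregation) yet pulls the aggregate away from the trust direction.
rng = np.random.default_rng(0)
g0 = rng.standard_normal(1000)                                   # stand-in root update
benign = [g0 + 0.1 * rng.standard_normal(1000) for _ in range(8)]
malicious = np.linalg.norm(g0) * fixed_angle_update(g0, np.deg2rad(85), rng)
agg = fltrust_like_aggregate(g0, benign + [malicious] * 2)
```

The design point the sketch tries to convey is the one stated in the abstract: the attacker's freedom lies in the angular margin the defense tolerates, so a malicious update is most damaging when it sits just inside that margin rather than directly opposing the trust gradient.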

Original language: English
Journal: Software - Practice and Experience
DOI
Publication status: Accepted/In press - 2022
