EGIA: An External Gradient Inversion Attack in Federated Learning

Haotian Liang, Youqi Li, Chuan Zhang*, Ximeng Liu, Liehuang Zhu

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Federated learning (FL) has achieved state-of-the-art performance in distributed learning tasks with privacy requirements. However, FL has been shown to be vulnerable to adversarial attacks. Typical gradient inversion attacks attempt to recover a client's private input in a white-box manner, where the adversary is assumed to be either the client or the server. But if both the clients and the server are honest and fully trusted, is FL secure? In this paper, we propose a novel method called the External Gradient Inversion Attack (EGIA) in the grey-box setting. Specifically, we focus on the widely ignored fact that publicly shared gradients in FL are always transmitted through intermediary nodes. On this basis, we demonstrate that an external adversary can reconstruct the private input from gradients even when both the clients and the server are honest and fully trusted. We also provide a comprehensive theoretical analysis of the black-box attack scenario in which the adversary has access only to the gradients. We perform extensive experiments on multiple real-world datasets to test the effectiveness of EGIA, and the results validate that EGIA is highly effective.
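To illustrate why intercepted gradients alone can leak a client's input, the following is a minimal sketch of the well-known closed-form leakage for a single linear layer with cross-entropy loss: each row of the weight gradient is a scalar multiple of the input, and the bias gradient supplies exactly those scalars. This is a generic gradient-leakage demonstration, not the paper's EGIA method; all variable names are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Honest client: computes gradients of the cross-entropy loss of a
# single linear layer, y = softmax(W x + b), on one private sample.
rng = np.random.default_rng(0)
x_private = rng.normal(size=4)       # the input the attacker wants
label = 2
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

p = softmax(W @ x_private + b)
delta = p.copy()
delta[label] -= 1.0                  # dL/dz for softmax cross-entropy
grad_W = np.outer(delta, x_private)  # dL/dW = delta · x^T
grad_b = delta                       # dL/db = delta

# An external eavesdropper on an intermediary node sees only
# (grad_W, grad_b). Row i of grad_W equals delta_i * x, and
# grad_b[i] = delta_i, so dividing recovers x exactly.
i = np.argmax(np.abs(grad_b))        # pick a numerically safe row
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x_reconstructed, x_private))  # True
```

For deeper models no such closed form exists, which is why gradient inversion attacks in general iteratively optimize a dummy input so that its gradients match the intercepted ones.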

Original language: English
Pages (from-to): 4984-4995
Number of pages: 12
Journal: IEEE Transactions on Information Forensics and Security
Volume: 18
DOI
Publication status: Published - 2023
