Abstract
Federated learning (FL) has achieved state-of-the-art performance in distributed learning tasks with privacy requirements. However, FL has been shown to be vulnerable to adversarial attacks. Typical gradient inversion attacks attempt to recover a client's private input in a white-box manner, where the adversary is assumed to be either a client or the server. But if both the clients and the server are honest and fully trusted, is FL secure? In this paper, we propose a novel method called External Gradient Inversion Attack (EGIA) in a grey-box setting. Specifically, we focus on the widely overlooked fact that publicly shared gradients in FL are always transmitted through intermediary nodes. On this basis, we demonstrate that an external adversary can reconstruct the private input from the gradients even if both the clients and the server are honest and fully trusted. We also provide a comprehensive theoretical analysis of the black-box attack scenario in which the adversary has only the gradients. Extensive experiments on multiple real-world datasets validate that EGIA is highly effective.
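For context on how a gradient inversion attack can recover an input from intercepted gradients, below is a minimal, illustrative DLG-style sketch in PyTorch. It is not the paper's EGIA method: it assumes the eavesdropper knows the model architecture and parameters, has captured the gradient of one private sample in transit, and (for simplicity) knows the true label; the toy model, shapes, and variable names are hypothetical.

```python
# Minimal DLG-style gradient-inversion sketch (illustrative only, not EGIA).
# Assumptions: the eavesdropper knows the model and the true label, and has
# intercepted the gradient of one private sample while it was in transit.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))  # toy model
criterion = nn.CrossEntropyLoss()

# Client side: one private sample produces the gradient that gets transmitted.
x_true = torch.randn(1, 32)
y_true = torch.tensor([1])
leaked_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                   model.parameters())

# Adversary side: optimize a dummy input so that its gradient matches the
# intercepted one; the dummy input converges toward the private input.
x_dummy = torch.randn(1, 32, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy])

def closure():
    optimizer.zero_grad()
    dummy_loss = criterion(model(x_dummy), y_true)
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # Squared L2 distance between the dummy gradient and the leaked gradient.
    grad_diff = sum(((dg - lg) ** 2).sum()
                    for dg, lg in zip(dummy_grads, leaked_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```

L-BFGS is used here because gradient matching on a single sample is a small, smooth optimization problem; EGIA's grey-box, external-adversary setting differs in what the attacker is assumed to know, which is what the paper analyzes.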
Original language | English |
---|---|
Pages (from-to) | 4984-4995 |
Number of pages | 12 |
Journal | IEEE Transactions on Information Forensics and Security |
Volume | 18 |
DOIs | |
Publication status | Published - 2023 |
Keywords
- Federated learning
- black-box attack
- external adversary
- gradient inversion
- grey-box attack