TY - GEN
T1 - Label Inference Attacks Against Federated Unlearning
AU - Wang, Wei
AU - Tang, Xiangyun
AU - Wang, Yajie
AU - Lin, Yijing
AU - Zhang, Tao
AU - Shen, Meng
AU - Niyato, Dusit
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.
PY - 2026
Y1 - 2026
N2 - Federated Unlearning (FU) has emerged as a promising solution for honoring clients' "right to be forgotten" by allowing clients to erase their data from global models without compromising model performance. Unfortunately, the model parameter variations induced by FU expose clients' data information, enabling attackers to infer the labels of unlearned data; yet label inference attacks against FU remain unexplored. In this paper, we introduce and analyze a new privacy threat against FU and propose a novel label inference attack, ULIA, which can infer unlearning data labels across three FU levels. To address the unique challenges of inferring labels from model variations, we design a gradient-label mapping mechanism in ULIA that establishes a relationship between gradient variations and unlearning labels, enabling label inference from accumulated model variations. We evaluate ULIA in both IID and non-IID settings. Experimental results show that in the IID setting, ULIA achieves a 100% Attack Success Rate (ASR) under both class-level and client-level unlearning. Even when only 1% of a user's local data is forgotten, ULIA still attains an ASR between 62.3% and 93%.
AB - Federated Unlearning (FU) has emerged as a promising solution for honoring clients' "right to be forgotten" by allowing clients to erase their data from global models without compromising model performance. Unfortunately, the model parameter variations induced by FU expose clients' data information, enabling attackers to infer the labels of unlearned data; yet label inference attacks against FU remain unexplored. In this paper, we introduce and analyze a new privacy threat against FU and propose a novel label inference attack, ULIA, which can infer unlearning data labels across three FU levels. To address the unique challenges of inferring labels from model variations, we design a gradient-label mapping mechanism in ULIA that establishes a relationship between gradient variations and unlearning labels, enabling label inference from accumulated model variations. We evaluate ULIA in both IID and non-IID settings. Experimental results show that in the IID setting, ULIA achieves a 100% Attack Success Rate (ASR) under both class-level and client-level unlearning. Even when only 1% of a user's local data is forgotten, ULIA still attains an ASR between 62.3% and 93%.
KW - Federated Learning
KW - Federated Unlearning
KW - Gradient-Label Mapping
KW - Label Inference Attack
KW - Privacy Protection
UR - https://www.scopus.com/pages/publications/105022976779
U2 - 10.1007/978-981-95-3001-4_1
DO - 10.1007/978-981-95-3001-4_1
M3 - Conference contribution
AN - SCOPUS:105022976779
SN - 9789819530007
T3 - Lecture Notes in Computer Science
SP - 1
EP - 16
BT - Knowledge Science, Engineering and Management - 18th International Conference, KSEM 2025, Proceedings
A2 - Zhu, Tianqing
A2 - Zhou, Wanlei
A2 - Zhu, Congcong
PB - Springer Science and Business Media Deutschland GmbH
T2 - 18th International Conference on Knowledge Science, Engineering and Management, KSEM 2025
Y2 - 4 August 2025 through 7 August 2025
ER -