Label Inference Attacks Against Federated Unlearning

Wei Wang, Xiangyun Tang*, Yajie Wang*, Yijing Lin, Tao Zhang, Meng Shen, Dusit Niyato, Liehuang Zhu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated Unlearning (FU) has emerged as a promising way to honor clients’ “right to be forgotten” by allowing them to erase their data from global models without compromising model performance. Unfortunately, the parameter variations that FU induces in models expose information about clients’ data, enabling attackers to infer the labels of unlearned data; yet label inference attacks against FU remain unexplored. In this paper, we introduce and analyze this new privacy threat against FU and propose a novel label inference attack, ULIA, which can infer unlearning data labels across three FU levels. To address the unique challenge of inferring labels from model variations, we design a gradient-label mapping mechanism in ULIA that establishes a relationship between gradient variations and unlearning labels, enabling label inference on accumulated model variations. We evaluate ULIA in both IID and non-IID settings. Experimental results show that in the IID setting, ULIA achieves a 100% Attack Success Rate (ASR) under both class-level and client-level unlearning. Even when only 1% of a user’s local data is forgotten, ULIA still attains an ASR ranging from 62.3% to 93%.
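To make the threat model concrete, the following is a minimal illustrative sketch, not the paper's ULIA algorithm: it assumes the attacker can observe the global model's classifier head before and after unlearning, and that the class whose head row changes most corresponds to the forgotten label. The function name `infer_unlearned_label` and the argmax-over-row-norms heuristic are stand-ins for the paper's gradient-label mapping over accumulated model variations.

```python
import numpy as np

def infer_unlearned_label(w_before: np.ndarray, w_after: np.ndarray) -> int:
    """Guess the unlearned class from per-class parameter variation.

    w_before, w_after: classifier-head weight matrices of shape
    (num_classes, feature_dim) observed before and after unlearning.
    """
    delta = w_after - w_before                        # accumulated model variation
    per_class_change = np.linalg.norm(delta, axis=1)  # L2 change per class row
    return int(np.argmax(per_class_change))           # class whose row shifted most

# Toy demo: unlearning class 3 perturbs its head row far more than the rest.
rng = np.random.default_rng(0)
w0 = rng.normal(size=(10, 64))
w1 = w0 + rng.normal(scale=0.01, size=w0.shape)  # small drift on all rows
w1[3] += rng.normal(scale=1.0, size=64)          # large shift on class 3
print(infer_unlearned_label(w0, w1))  # prints 3
```

In practice the attack in the paper operates on gradient variations across FU rounds rather than a single weight snapshot pair, but the core intuition is the same: forgetting data of one class leaves a disproportionate signature on the parameters tied to that class.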

Original language: English
Title of host publication: Knowledge Science, Engineering and Management - 18th International Conference, KSEM 2025, Proceedings
Editors: Tianqing Zhu, Wanlei Zhou, Congcong Zhu
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 1-16
Number of pages: 16
ISBN (Print): 9789819530007
DOIs
Publication status: Published - 2026
Externally published: Yes
Event: 18th International Conference on Knowledge Science, Engineering and Management, KSEM 2025 - Macao, China
Duration: 4 Aug 2025 - 7 Aug 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15919 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 18th International Conference on Knowledge Science, Engineering and Management, KSEM 2025
Country/Territory: China
City: Macao
Period: 4/08/25 - 7/08/25

Keywords

  • Federated Learning
  • Federated Unlearning
  • Gradient-Label Mapping
  • Label Inference Attack
  • Privacy Protection
