Abstract
Machine unlearning is a critical technique for protecting data security in machine learning, aiming to selectively remove the influence of specific data from models while preserving overall model performance. However, conventional unlearning methods typically rely on full model retraining, which incurs high computational costs and lacks adaptability in real-world scenarios. This paper introduces a new framework, federated meta-unlearning (FMU), tailored for federated learning environments by integrating the precision of machine unlearning with the flexibility of meta-learning. Built upon the model-agnostic meta-learning (MAML) algorithm, FMU incorporates hierarchical model decomposition and a weighted aggregation mechanism, enabling efficient erasure of targeted data traces while minimizing the impact on global model performance. The core idea is to decompose the global federated model into independently editable component modules. Upon receiving an unlearning request, targeted parameter adjustment is applied to selectively fine-tune or remove specific modules, followed by weighted aggregation to ensure model integrity and adaptability. Extensive empirical evaluations on diverse datasets and cross-modal tasks demonstrate that FMU significantly reduces computational overhead compared to four prevalent unlearning methods (traditional retraining, MAML-based retraining, SISA unlearning, and gradient-based unlearning), while maintaining high model performance and compliance with privacy standards. Importantly, FMU enhances model adaptability to new data and tasks, validating its practical value in dynamic, privacy-sensitive environments.
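The core idea above (a global model split into independently editable modules, module-level unlearning, then weighted aggregation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the module names (`encoder`, `head`), the list-of-floats parameter representation, and the strategy of swapping in a reference (e.g. re-initialized) module are all hypothetical simplifications.

```python
def weighted_aggregate(client_modules, weights):
    """Aggregate per-client modules into a global model.

    Each client's model is a dict mapping module name -> parameter vector
    (a plain list of floats here). The global value of each module is the
    weight-normalized average across clients, as in FedAvg-style schemes.
    """
    total = sum(weights)
    keys = client_modules[0].keys()
    return {
        name: [
            sum(w * cm[name][i] for w, cm in zip(weights, client_modules)) / total
            for i in range(len(client_modules[0][name]))
        ]
        for name in keys
    }


def unlearn_module(modules, target, reference):
    """Handle an unlearning request by editing only the targeted module.

    The untouched modules keep their aggregated parameters; the targeted
    module is replaced with a reference version (e.g. re-initialized or
    fine-tuned without the data to be forgotten).
    """
    edited = dict(modules)
    edited[target] = reference
    return edited


# Hypothetical usage: two clients, each holding two modules.
client_a = {"encoder": [0.0, 0.0], "head": [1.0, 1.0]}
client_b = {"encoder": [4.0, 4.0], "head": [1.0, 1.0]}

global_model = weighted_aggregate([client_a, client_b], weights=[1.0, 3.0])
# encoder -> [3.0, 3.0], head -> [1.0, 1.0]

# An unlearning request touching only the "head" module:
forgotten = unlearn_module(global_model, "head", reference=[0.0, 0.0])
```

Because the edit is confined to one module, the rest of the aggregated model is untouched, which is what lets a module-level scheme avoid the cost of full retraining.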
| Translated title of the contribution | Federated meta unlearning based on model decomposition and weighted aggregation |
|---|---|
| Original language | Chinese (Traditional) |
| Pages (from-to) | 2722-2740 |
| Number of pages | 19 |
| Journal | Scientia Sinica Informationis |
| Volume | 55 |
| Issue number | 11 |
| DOIs | |
| Publication status | Published - 1 Nov 2025 |
| Externally published | Yes |