MUFTI: Multi-Domain Distillation-Based Heterogeneous Federated Continuous Learning

Keke Gai, Zijun Wang, Jing Yu*, Liehuang Zhu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Federated Learning (FL) is an approach that enables training machine learning models on distributed users' data while preserving privacy. However, clients have different local model structures and most local data are non-independent and identically distributed (non-IID), so FL encounters heterogeneity and catastrophic forgetting issues as clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we extend domain adaptation to FL by extracting feature representations on unlabeled public datasets for collaborative training, narrowing the distance between the feature outputs of different models on the same sample. On the other hand, we propose a combined knowledge distillation method to address the catastrophic forgetting issue. Within a single task, dual-domain distillation is used to avoid forgetting data from different domains; for cross-task learning in the task flow, the logit outputs of the previous model are used as the teacher to avoid forgetting old tasks. The experimental results show that MUFTI achieves better accuracy and robustness compared to state-of-the-art methods. The evaluation also demonstrates that MUFTI handles task-increment issues well, reduces catastrophic forgetting, and achieves trade-offs between multiple objectives.
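To make the two distillation signals in the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' implementation. The names `client_model`, `prev_task_model`, `public_batch`, and `peer_features` are hypothetical placeholders: it assumes the local model returns a (features, logits) pair, that aggregated peer feature representations on the shared unlabeled public samples are available, and that a frozen copy of the previous task's model serves as the teacher.

```python
# Minimal sketch (assumptions only, not the released MUFTI code) of the two
# distillation terms described in the abstract.
import torch
import torch.nn.functional as F

def mufti_style_losses(client_model, prev_task_model, public_batch,
                       peer_features, temperature=2.0):
    """Return (feature-alignment loss, cross-task distillation loss).

    client_model    : local heterogeneous model returning (features, logits)
    prev_task_model : frozen model from the previous task, used as teacher
    public_batch    : unlabeled public-dataset samples shared by all clients
    peer_features   : aggregated feature representations from other clients
                      on the same public samples (hypothetical name)
    """
    features, logits = client_model(public_batch)

    # (1) Domain-adaptation term: pull this client's feature outputs on the
    # shared public samples toward the aggregated peer representations,
    # narrowing the distance between heterogeneous models on the same sample.
    feat_align_loss = F.mse_loss(features, peer_features)

    # (2) Cross-task term: distill from the previous task's logits so the
    # current model does not forget old tasks (catastrophic forgetting).
    with torch.no_grad():
        _, prev_logits = prev_task_model(public_batch)
    kd_loss = F.kl_div(
        F.log_softmax(logits / temperature, dim=1),
        F.softmax(prev_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    return feat_align_loss, kd_loss
```

In a training loop, these two terms would be added (with weighting coefficients) to the client's local supervised loss before back-propagation; the weighting is a design choice not specified by the abstract.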

Original language: English
Pages (from-to): 2721-2733
Number of pages: 13
Journal: IEEE Transactions on Information Forensics and Security
Volume: 20
DOIs
Publication status: Published - 2025

Keywords

  • continuous learning
  • heterogeneous federated learning
  • knowledge distillation