TY - JOUR
T1 - MUFTI
T2 - Multi-Domain Distillation-Based Heterogeneous Federated Continuous Learning
AU - Gai, Keke
AU - Wang, Zijun
AU - Yu, Jing
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© 2025 IEEE. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Federated Learning (FL) is an approach that enables training machine learning models on distributed users’ data while preserving privacy. However, clients often have different local model structures and most local data are non-independent and identically distributed (non-IID), so FL encounters heterogeneity and catastrophic forgetting issues when clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we extend domain adaptation to FL by extracting feature representations on unlabeled public datasets for collaborative training, narrowing the distance between the feature outputs of different models on the same sample. On the other hand, we propose a combined knowledge distillation method to address the catastrophic forgetting issue. Within a single task, dual-domain distillation is used to avoid forgetting data across different domains; for cross-task learning in the task stream, the logit outputs of the previous model serve as the teacher to avoid forgetting old tasks. The experimental results show that MUFTI achieves better accuracy and robustness compared to state-of-the-art methods. The evaluation also demonstrates that MUFTI performs well in handling task-increment settings, reducing catastrophic forgetting, and achieving trade-offs between multiple objectives.
AB - Federated Learning (FL) is an approach that enables training machine learning models on distributed users’ data while preserving privacy. However, clients often have different local model structures and most local data are non-independent and identically distributed (non-IID), so FL encounters heterogeneity and catastrophic forgetting issues when clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we extend domain adaptation to FL by extracting feature representations on unlabeled public datasets for collaborative training, narrowing the distance between the feature outputs of different models on the same sample. On the other hand, we propose a combined knowledge distillation method to address the catastrophic forgetting issue. Within a single task, dual-domain distillation is used to avoid forgetting data across different domains; for cross-task learning in the task stream, the logit outputs of the previous model serve as the teacher to avoid forgetting old tasks. The experimental results show that MUFTI achieves better accuracy and robustness compared to state-of-the-art methods. The evaluation also demonstrates that MUFTI performs well in handling task-increment settings, reducing catastrophic forgetting, and achieving trade-offs between multiple objectives.
KW - Continuous learning
KW - Heterogeneous federated learning
KW - knowledge distillation
UR - http://www.scopus.com/inward/record.url?scp=86000794931&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2025.3542246
DO - 10.1109/TIFS.2025.3542246
M3 - Article
AN - SCOPUS:86000794931
SN - 1556-6013
VL - 20
SP - 2721
EP - 2733
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -