Federated deep long-tailed learning: A survey

Kan Li*, Yang Li, Ji Zhang, Xin Liu, Zhichao Ma

*Corresponding author for this work

Research output: Contribution to journal › Short survey › peer-review

1 Citation (Scopus)

Abstract

Federated learning, a privacy-preserving framework, has achieved fruitful results in training deep models across clients. This survey provides a systematic overview of federated deep long-tailed learning. We analyze its core problems: class imbalance and missing classes, divergent long-tailed distributions across clients, and biased training. We summarize current approaches under three categories: information enhancement, model component optimization, and algorithm-based calibration. We also compile representative open-source datasets for different tasks. We conduct extensive experiments on CIFAR-10/100-LT using LeNet-5, ResNet-8, and ResNet-34, and evaluate model performance with multiple metrics. We further consider a text classification task, evaluating multiple methods with an LSTM on 20NewsGroups-LT. Finally, we discuss the challenges posed by data heterogeneity, model heterogeneity, fairness, and security, and identify directions for future research.
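The abstract does not spell out the data-construction protocol, but long-tailed benchmarks such as CIFAR-10-LT are conventionally built with an exponential class-count decay controlled by an imbalance factor, and federated splits are commonly produced with a Dirichlet partition, which is what yields the per-client class imbalance and missing classes the survey analyzes. The sketch below illustrates that standard recipe; the function names and parameters are illustrative assumptions, not the survey's code.

```python
import numpy as np

def long_tailed_counts(n_classes: int, n_max: int, imbalance_factor: float) -> list[int]:
    # Standard exponential profile: class c keeps
    # n_max * imbalance_factor^(-c / (n_classes - 1)) samples,
    # so the head class has n_max and the tail class n_max / imbalance_factor.
    return [int(n_max * imbalance_factor ** (-c / (n_classes - 1)))
            for c in range(n_classes)]

def dirichlet_partition(counts: list[int], n_clients: int, alpha: float,
                        rng: np.random.Generator) -> np.ndarray:
    # Split each class's samples across clients with Dirichlet(alpha)
    # proportions; smaller alpha means more heterogeneous clients, so some
    # clients may receive no samples at all from the tail classes.
    partition = np.zeros((n_clients, len(counts)), dtype=int)
    for c, n in enumerate(counts):
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        partition[:, c] = rng.multinomial(n, proportions)
    return partition

rng = np.random.default_rng(0)
counts = long_tailed_counts(n_classes=10, n_max=5000, imbalance_factor=100)
print(counts)  # [5000, 2997, 1796, 1077, 645, 387, 232, 139, 83, 50]
clients = dirichlet_partition(counts, n_clients=5, alpha=0.5, rng=rng)
print(clients.sum(axis=1))  # total samples held by each client
```

With imbalance factor 100, the head class keeps all 5000 CIFAR-10 training images while the tail class keeps only 50; lowering alpha in the Dirichlet partition sharpens the mismatch between each client's local distribution and the global one.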

Original language: English
Article number: 127906
Journal: Neurocomputing
Volume: 595
Publication status: Published - 28 Aug 2024

Keywords

  • Agnostic distribution
  • Deep learning
  • Federated learning
  • Long-tailed distribution
