Federated Long-Tailed Learning by Retraining the Biased Classifier with Prototypes

Yang Li, Kan Li*

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

Federated learning is a privacy-preserving framework that collaboratively trains a global model without sharing raw data among clients. However, one significant issue in federated learning is that biased classifiers degrade the classification performance of the global model, especially when training on long-tailed data. Retraining the classifier on balanced datasets requires sharing clients' information and poses a risk of privacy leakage. We propose a method for retraining the biased classifier using prototypes, which leverages the comparison of distances between local and global prototypes to guide the local training process. We conduct experiments on CIFAR-10-LT and CIFAR-100-LT, and our approach outperforms baseline methods in accuracy, with improvements of up to 10%.
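The abstract only outlines the mechanism, so the following is a minimal sketch (not the paper's actual implementation) of one plausible ingredient: computing per-class prototypes as mean feature vectors on a client and penalizing their distance to the server-aggregated global prototypes as a regularizer during local training. All function names here are illustrative assumptions.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Per-class mean feature vectors ("prototypes") for one client.

    features: (n_samples, dim) array of extracted representations.
    labels:   (n_samples,) integer class labels.
    Classes absent from this client keep a zero prototype.
    """
    dim = features.shape[1]
    protos = np.zeros((num_classes, dim))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def prototype_distance_loss(local_protos, global_protos):
    """Mean squared Euclidean distance between local and global prototypes.

    Adding this term to the local objective pulls each client's class
    representations toward the globally aggregated class centers, which is
    one way to counteract a classifier biased by a long-tailed local split.
    """
    return float(np.mean(np.sum((local_protos - global_protos) ** 2, axis=1)))
```

A client would add `lambda * prototype_distance_loss(...)` to its usual cross-entropy objective, with the trade-off weight `lambda` tuned per dataset; the loss is zero exactly when the local and global prototypes coincide.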

Original language: English
Title of host publication: Frontiers in Cyber Security - 6th International Conference, FCS 2023, Revised Selected Papers
Editors: Haomiao Yang, Rongxing Lu
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 575-585
Number of pages: 11
ISBN (Print): 9789819993307
DOI
Publication status: Published - 2024
Event: 6th International Conference on Frontiers in Cyber Security, FCS 2023 - Chengdu, China
Duration: 21 Aug 2023 - 23 Aug 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1992
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 6th International Conference on Frontiers in Cyber Security, FCS 2023
Country/Territory: China
City: Chengdu
Period: 21/08/23 - 23/08/23
