TY - GEN
T1 - Federated Learning for Assigning Weights to Clients on Long-Tailed Data
AU - Li, Yang
AU - Li, Kan
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Federated learning enables multiple clients to collaboratively train a shared model without transmitting their data. Although this approach offers significant advantages for data privacy protection, variations in data distribution among clients can lead to inconsistencies in model updates, particularly on long-tailed data, which prominently affect the model's ability to learn the generalizable features essential for enhancing local model performance. In this study, we propose a novel re-weighting federated learning method that incorporates a dynamic weight allocation mechanism aimed at balancing each client's local model updates with the aggregation of the global model during training. Specifically, we employ balanced resampling locally at each client to rectify biases and cluster clients based on feature similarity, assigning weights appropriately. This strategy not only strengthens the model's capacity to learn cross-client generalizable features but also minimizes the divergence between local models and the global model. Empirical results on the MNIST-LT and EMNIST-LT datasets demonstrate that our method outperforms baseline approaches, and our analysis reveals the key factors behind its effectiveness.
AB - Federated learning enables multiple clients to collaboratively train a shared model without transmitting their data. Although this approach offers significant advantages for data privacy protection, variations in data distribution among clients can lead to inconsistencies in model updates, particularly on long-tailed data, which prominently affect the model's ability to learn the generalizable features essential for enhancing local model performance. In this study, we propose a novel re-weighting federated learning method that incorporates a dynamic weight allocation mechanism aimed at balancing each client's local model updates with the aggregation of the global model during training. Specifically, we employ balanced resampling locally at each client to rectify biases and cluster clients based on feature similarity, assigning weights appropriately. This strategy not only strengthens the model's capacity to learn cross-client generalizable features but also minimizes the divergence between local models and the global model. Empirical results on the MNIST-LT and EMNIST-LT datasets demonstrate that our method outperforms baseline approaches, and our analysis reveals the key factors behind its effectiveness.
KW - Deep Learning
KW - Federated Learning
KW - Image Classification
KW - Long-Tailed Data
UR - http://www.scopus.com/inward/record.url?scp=85201231781&partnerID=8YFLogxK
U2 - 10.1007/978-981-97-5666-7_37
DO - 10.1007/978-981-97-5666-7_37
M3 - Conference contribution
AN - SCOPUS:85201231781
SN - 9789819756650
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 438
EP - 449
BT - Advanced Intelligent Computing Technology and Applications - 20th International Conference, ICIC 2024, Proceedings
A2 - Huang, De-Shuang
A2 - Pan, Yijie
A2 - Zhang, Chuanlei
PB - Springer Science and Business Media Deutschland GmbH
T2 - 20th International Conference on Intelligent Computing, ICIC 2024
Y2 - 5 August 2024 through 8 August 2024
ER -