A loss-based weighted aggregation method for federated learning with heterogeneous computing resources

  • Chao Yao
  • Haochen Wang
  • Fan Yang
  • Yi Yang
  • Zehua Guo*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated Learning (FL) faces challenges in model convergence and stability due to the heterogeneity of client resources, where differing local batch sizes degrade global model performance. We propose FedLW, a dynamic client weighting approach that uses training loss values to assess model quality and assigns higher weights to clients with lower losses. This enables the server to prioritize gradients from well-performing clients, take full advantage of different clients at different training stages, and improve the generalization ability of the global model. Experiments on MNIST, Fashion-MNIST, and CIFAR-10 show that FedLW improves accuracy by up to 2.24% compared to standard FL aggregation methods.
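The abstract does not specify FedLW's exact weighting formula, but the core idea — map lower client training losses to higher aggregation weights — can be sketched as follows. This is a hypothetical illustration, not the paper's method: the softmax-over-negative-losses mapping, the `temperature` parameter, and the function names are all assumptions for the sketch.

```python
# Hypothetical sketch of loss-based weighted aggregation (NOT the exact
# FedLW formula from the paper): clients with lower training loss receive
# larger weights via a softmax over negative losses, and the server then
# averages client parameters with those weights.
import numpy as np

def loss_based_weights(losses, temperature=1.0):
    """Map per-client training losses to aggregation weights.

    Lower loss -> higher weight; weights are normalized to sum to 1.
    """
    losses = np.asarray(losses, dtype=float)
    logits = -losses / temperature
    logits -= logits.max()        # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

def aggregate(client_params, losses):
    """Weighted average of client parameter vectors (one row per client)."""
    w = loss_based_weights(losses)
    return w @ np.vstack(client_params)

# Example: three clients; the lowest-loss client contributes the most.
params = [np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([3.0, 3.0])]
losses = [0.2, 0.8, 1.5]
global_params = aggregate(params, losses)
```

A softmax keeps all clients in the average while still down-weighting high-loss ones; a plain inverse-loss weighting would be an equally plausible reading of the abstract.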

Original language: English
Title of host publication: APNet 2025 - Proceedings of the 9th Asia-Pacific Workshop on Networking
Publisher: Association for Computing Machinery, Inc
Pages: 307-309
Number of pages: 3
ISBN (Electronic): 9798400714016
Publication status: Published - 6 Aug 2025
Event: 9th Asia-Pacific Workshop on Networking, APNet 2025 - Shanghai, China
Duration: 7 Aug 2025 - 8 Aug 2025

Publication series

Name: APNet 2025 - Proceedings of the 9th Asia-Pacific Workshop on Networking

Conference

Conference: 9th Asia-Pacific Workshop on Networking, APNet 2025
Country/Territory: China
City: Shanghai
Period: 7/08/25 - 8/08/25

Keywords

  • Batch Size
  • Federated Learning
  • Heterogeneous
  • Weighted Aggregation
