DynFed: Adaptive Federated Learning via Quantization-Aware Knowledge Distillation

  • Nan He
  • Yiming Chen
  • Zheng Jiang
  • Song Yang
  • Lifeng Sun*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated Learning (FL) has become a powerful technique for collaborative model training across decentralized entities while preserving data privacy. Despite its potential, FL faces significant challenges, including communication overhead, resource heterogeneity, and data heterogeneity. Existing solutions fall short in addressing disparities in client resources and the errors introduced by direct model aggregation across heterogeneous clients. To tackle these issues, we propose DynFed, a novel federated learning framework that incorporates dynamic quantization bit-width allocation and multi-teacher knowledge distillation for model aggregation. DynFed dynamically allocates quantization bit-widths to clients based on their resource heterogeneity, adapting these allocations according to variations in the local loss function during training. This adaptive quantization strategy optimizes resource utilization while preserving model performance. For model aggregation, DynFed utilizes a dynamic multi-teacher knowledge distillation approach, assigning the most suitable teacher model to each data sample based on a comprehensive evaluation score, thereby ensuring effective knowledge transfer even in the presence of quantization-induced errors. This method not only mitigates the negative effects of heterogeneous bit-widths but also leverages client model diversity to enhance the robustness of the global model. Extensive experimental results demonstrate the superiority of DynFed over state-of-the-art methods.
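The two mechanisms the abstract describes, loss-driven bit-width allocation and per-sample teacher selection, can be sketched roughly as below. This is a minimal illustration, not the paper's actual formulation: the function names `allocate_bitwidths` and `select_teachers`, the loss-delta thresholds, and the use of softmax confidence on the true label as the "evaluation score" are all assumptions for the sake of the sketch.

```python
import numpy as np

def allocate_bitwidths(loss_deltas, capacities, bit_options=(4, 8, 16)):
    """Assign a quantization bit-width to each client.

    Hypothetical heuristic: a client whose local loss is still falling
    quickly (large negative delta) is granted more precision, capped by
    the highest bit-width its resources allow.
    """
    bits = []
    for delta, cap in zip(loss_deltas, capacities):
        if delta < -0.1:        # loss dropping fast: request high precision
            want = bit_options[2]
        elif delta < -0.01:     # moderate improvement: medium precision
            want = bit_options[1]
        else:                   # plateaued: low precision suffices
            want = bit_options[0]
        bits.append(min(want, cap))  # never exceed the client's capacity
    return bits

def select_teachers(teacher_logits, labels):
    """Per-sample teacher selection for multi-teacher distillation.

    teacher_logits: array of shape (T, N, C) -- T teachers, N samples,
    C classes. For each sample, pick the teacher with the highest
    softmax probability on the true label and return its soft targets.
    """
    # numerically stable softmax over the class axis
    probs = np.exp(teacher_logits - teacher_logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    n = len(labels)
    scores = probs[:, np.arange(n), labels]      # (T, N) confidence scores
    best = scores.argmax(axis=0)                 # best teacher per sample
    soft_targets = probs[best, np.arange(n)]     # (N, C) distillation targets
    return best, soft_targets
```

In this sketch the global model would then be trained against `soft_targets` with a standard distillation loss; the paper's "comprehensive evaluation score" presumably combines more signals than label confidence alone.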

Original language: English
Title of host publication: MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
Publisher: Association for Computing Machinery, Inc
Pages: 11844-11852
Number of pages: 9
ISBN (Electronic): 9798400720352
DOIs
Publication status: Published - 27 Oct 2025
Externally published: Yes
Event: 33rd ACM International Conference on Multimedia, MM 2025 - Dublin, Ireland
Duration: 27 Oct 2025 - 31 Oct 2025

Publication series

Name: MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025

Conference

Conference: 33rd ACM International Conference on Multimedia, MM 2025
Country/Territory: Ireland
City: Dublin
Period: 27/10/25 - 31/10/25

Keywords

  • federated learning
  • knowledge distillation
  • quantization
