TY - GEN
T1 - DynFed
T2 - 33rd ACM International Conference on Multimedia, MM 2025
AU - He, Nan
AU - Chen, Yiming
AU - Jiang, Zheng
AU - Yang, Song
AU - Sun, Lifeng
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/10/27
Y1 - 2025/10/27
AB - Federated Learning (FL) has become a powerful technique for collaborative model training across decentralized entities while preserving data privacy. Despite its potential, FL faces significant challenges, including communication overhead, resource heterogeneity, and data heterogeneity. Existing solutions fall short in addressing disparities in client resources and the errors introduced by direct model aggregation across heterogeneous clients. To tackle these issues, we propose DynFed, a novel federated learning framework that incorporates dynamic quantization bit-width allocation and multi-teacher knowledge distillation for model aggregation. DynFed dynamically allocates quantization bit-widths to clients based on their resource heterogeneity, adapting these allocations according to variations in the local loss function during training. This adaptive quantization strategy optimizes resource utilization while preserving model performance. For model aggregation, DynFed employs a dynamic multi-teacher knowledge distillation approach, assigning the most suitable teacher model to each data sample based on a comprehensive evaluation score, thereby ensuring effective knowledge transfer even in the presence of quantization-induced errors. This method not only mitigates the negative effects of heterogeneous bit-widths but also leverages client model diversity to enhance the robustness of the global model. Extensive experimental results demonstrate the superiority of DynFed over state-of-the-art methods.
KW - federated learning
KW - knowledge distillation
KW - quantization
UR - https://www.scopus.com/pages/publications/105024063180
DO - 10.1145/3746027.3755451
M3 - Conference contribution
AN - SCOPUS:105024063180
T3 - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia
SP - 11844
EP - 11852
BT - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 27 October 2025 through 31 October 2025
ER -