AsyFed: Accelerated Federated Learning with Asynchronous Communication Mechanism

Zhixin Li, Chunpu Huang, Keke Gai*, Zhihui Lu, Jie Wu, Lulu Chen, Yangchuan Xu, Kim-Kwang Raymond Choo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

As a new distributed machine learning (ML) framework for privacy protection, federated learning (FL) enables a large number of Internet of Things (IoT) devices (e.g., mobile phones and tablets) to participate in the collaborative training of an ML model. FL protects the data privacy of IoT devices without exposing their raw data. However, the heterogeneity of IoT devices may degrade the overall training process due to the straggler issue. To tackle this problem, we propose a gear-based asynchronous FL (AsyFed) architecture. It adds a gear layer between the clients and the FL server as a mediator that stores the model parameters. The key insight is to group clients with similar training abilities into the same gear. Clients within the same gear train synchronously, while the gears communicate with the global FL server asynchronously. In addition, we propose a T-step mechanism that reduces the weight of slow gears when they communicate with the FL server. Extensive experimental evaluations indicate that AsyFed outperforms FedAvg (the baseline synchronous FL scheme) and several state-of-the-art asynchronous FL methods in terms of training accuracy or speed under different data distributions. The only overhead, which is negligible, is the extra gear layer used to store part of the model parameters.
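For intuition, the following is a minimal sketch (in Python) of the aggregation scheme outlined above, assuming a FedAvg-style average within each gear and an exponential staleness discount for the T-step mechanism; all function names, parameters, and the discount form are hypothetical illustrations, not the paper's actual formulation.

# Minimal sketch (not the authors' code) of gear-based asynchronous
# aggregation. The exponential discount form of the T-step mechanism
# and all names/parameters here are illustrative assumptions.
import numpy as np

def t_step_weight(staleness: int, t: int = 2, alpha: float = 0.6) -> float:
    # Keep full weight while a gear is at most T steps stale; afterwards,
    # discount its contribution exponentially (assumed damping form).
    return 1.0 if staleness <= t else alpha ** (staleness - t)

def gear_update(client_models: list[np.ndarray]) -> np.ndarray:
    # Clients inside one gear train synchronously, so the gear submits a
    # single FedAvg-style average of its clients' models.
    return np.mean(client_models, axis=0)

def async_server_step(global_model: np.ndarray,
                      gear_model: np.ndarray,
                      staleness: int,
                      base_mix: float = 0.5) -> np.ndarray:
    # Gears arrive asynchronously; mix each incoming gear model into the
    # global model, damped by the T-step weight when the gear lags behind.
    mix = base_mix * t_step_weight(staleness)
    return (1.0 - mix) * global_model + mix * gear_model

# Usage: a fresh gear (staleness 0) moves the global model more than a
# slow gear (staleness 5) reporting at the same server round.
global_model = np.zeros(4)
fast = gear_update([np.ones(4), 3 * np.ones(4)])   # averages to 2.0
global_model = async_server_step(global_model, fast, staleness=0)
slow = gear_update([5 * np.ones(4)])
global_model = async_server_step(global_model, slow, staleness=5)

Under these assumptions, grouping clients by training speed keeps intra-gear synchronization cheap (similar devices finish rounds at similar times), while the staleness discount bounds how much an outdated gear can pull the global model backwards.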

Original language: English
Pages (from-to): 8670-8683
Number of pages: 14
Journal: IEEE Internet of Things Journal
Volume: 10
Issue number: 10
DOI
Publication status: Published - 15 May 2023
