Adaptive Federated Learning on Non-IID Data With Resource Constraint

Jie Zhang, Song Guo, Zhihao Qu*, Deze Zeng, Yufeng Zhan, Qifeng Liu, Rajendra Akerkar

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

67 Citations (Scopus)

Abstract

Federated learning (FL) has been widely recognized as a promising approach that enables individual end-devices to cooperatively train a global model without exposing their own data. One of the key challenges in FL is non-independent and identically distributed (Non-IID) data across clients, which decreases the efficiency of the stochastic gradient descent (SGD)-based training process. Moreover, clients with different data distributions may bias the global model update, resulting in degraded model accuracy. To tackle the Non-IID problem in FL, we optimize the local training process and the global aggregation simultaneously. For local training, we analyze the effect of hyperparameters (e.g., the batch size and the number of local updates) on the training performance of FL. Guided by a toy example and theoretical analysis, we mitigate the negative impact of Non-IID data by selecting a subset of participants and adaptively adjusting their batch sizes. A deep reinforcement learning based approach is proposed to adaptively control the training of local models and the phase of global aggregation. Extensive experiments on different datasets show that our method improves model accuracy by up to 30 percent compared with state-of-the-art approaches.
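To make the abstract's core idea concrete, the sketch below shows a minimal FedAvg-style round in which the server aggregates only a selected subset of clients and each selected client trains with its own batch size. This is a hedged illustration, not the paper's algorithm: the deep-reinforcement-learning controller is not reproduced, the least-squares objective and all names (`local_sgd`, `federated_round`, the fixed `batch_sizes`) are assumptions standing in for the controller's decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical sketch; names and hyperparameters are illustrative assumptions ---

def local_sgd(w, data, batch_size, local_steps=5, lr=0.1):
    """A few mini-batch SGD steps on a least-squares objective (stand-in for local training)."""
    X, y = data
    for _ in range(local_steps):
        idx = rng.choice(len(y), size=min(batch_size, len(y)), replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w = w - lr * grad
    return w

def federated_round(w_global, selected, batch_sizes):
    """One FL round: selected clients train locally; server averages by sample count."""
    updates = [local_sgd(w_global.copy(), data, bs)
               for data, bs in zip(selected, batch_sizes)]
    n = np.array([len(y) for _, y in selected], dtype=float)
    weights = n / n.sum()
    return sum(wi * u for wi, u in zip(weights, updates))

# Synthetic Non-IID clients: same linear model, feature distribution shifted per client.
w_star = np.array([1.0, -2.0, 0.5])
clients = []
for shift in (-1.0, 0.0, 1.0, 2.0):
    X = rng.normal(loc=shift, size=(40, 3))
    clients.append((X, X @ w_star))

def global_loss(w):
    return float(np.mean([np.mean((X @ w - y) ** 2) for X, y in clients]))

w = np.zeros(3)
loss_before = global_loss(w)
for _ in range(10):
    # A fixed subset and fixed batch sizes stand in for the DRL controller's choices.
    w = federated_round(w, clients[:3], batch_sizes=[8, 16, 32])
loss_after = global_loss(w)
```

Weighting updates by sample count mirrors standard FedAvg aggregation; the paper's contribution is in choosing `selected` and `batch_sizes` adaptively rather than fixing them as done here.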

Original language: English
Pages (from-to): 1655-1667
Number of pages: 13
Journal: IEEE Transactions on Computers
Volume: 71
Issue number: 7
DOI
Publication status: Published - 1 Jul 2022
