TY - GEN
T1 - Experience-Driven Computational Resource Allocation of Federated Learning by Deep Reinforcement Learning
AU - Zhan, Yufeng
AU - Li, Peng
AU - Guo, Song
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Federated learning is promising for enabling large-scale machine learning on massive mobile devices without exposing the raw data of users with strong privacy concerns. Existing work on federated learning focuses on accelerating the learning process but ignores energy efficiency, which is critical for resource-constrained mobile devices. In this paper, we propose to improve the energy efficiency of federated learning by lowering the CPU-cycle frequency of mobile devices that are faster within the training group. Since all devices are synchronized by iterations, the federated learning speed is preserved as long as they complete training before the slowest device in each iteration. Based on this idea, we formulate an optimization problem that aims to minimize the total system cost, defined as a weighted sum of training time and energy consumption. Due to the hardness of the nonlinear constraints and the unawareness of network quality, we design an experience-driven algorithm based on Deep Reinforcement Learning (DRL), which can converge to a near-optimal solution without knowledge of network quality. Experiments on a small-scale testbed and large-scale simulations are conducted to evaluate the proposed algorithm. The results show that it outperforms the state-of-the-art by up to 40%.
AB - Federated learning is promising for enabling large-scale machine learning on massive mobile devices without exposing the raw data of users with strong privacy concerns. Existing work on federated learning focuses on accelerating the learning process but ignores energy efficiency, which is critical for resource-constrained mobile devices. In this paper, we propose to improve the energy efficiency of federated learning by lowering the CPU-cycle frequency of mobile devices that are faster within the training group. Since all devices are synchronized by iterations, the federated learning speed is preserved as long as they complete training before the slowest device in each iteration. Based on this idea, we formulate an optimization problem that aims to minimize the total system cost, defined as a weighted sum of training time and energy consumption. Due to the hardness of the nonlinear constraints and the unawareness of network quality, we design an experience-driven algorithm based on Deep Reinforcement Learning (DRL), which can converge to a near-optimal solution without knowledge of network quality. Experiments on a small-scale testbed and large-scale simulations are conducted to evaluate the proposed algorithm. The results show that it outperforms the state-of-the-art by up to 40%.
KW - deep reinforcement learning
KW - experience-driven
KW - federated learning
UR - http://www.scopus.com/inward/record.url?scp=85088893807&partnerID=8YFLogxK
U2 - 10.1109/IPDPS47924.2020.00033
DO - 10.1109/IPDPS47924.2020.00033
M3 - Conference contribution
AN - SCOPUS:85088893807
T3 - Proceedings - 2020 IEEE 34th International Parallel and Distributed Processing Symposium, IPDPS 2020
SP - 234
EP - 243
BT - Proceedings - 2020 IEEE 34th International Parallel and Distributed Processing Symposium, IPDPS 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 34th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2020
Y2 - 18 May 2020 through 22 May 2020
ER -