TY - JOUR
T1 - L4L
T2 - Experience-Driven Computational Resource Control in Federated Learning
AU - Zhan, Yufeng
AU - Li, Peng
AU - Wu, Leijie
AU - Guo, Song
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - With the large-scale deployment of machine learning applications, much research attention has turned to exploiting the vast amounts of data stored on mobile clients. To preserve data privacy, federated learning has been proposed to enable large-scale machine learning across massive numbers of clients without exposing raw data. Existing works on federated learning strive to accelerate the learning process, but ignore the energy efficiency that is critical for resource-constrained clients. In this article, we propose to improve the energy efficiency of federated learning by lowering the CPU cycle frequencies of the faster clients in each training group. Based on this idea, we formulate an optimization problem that minimizes the total system cost, defined as a weighted sum of learning time and energy consumption. Due to the hardness of the formulated optimization problem and the unpredictability of network quality, we propose L4L (Learning for Learning), an experience-driven computational resource control approach based on deep reinforcement learning, which derives a near-optimal solution using only the clients' bandwidth information from previous training rounds. We conduct experiments on both real-world and synthetic traces to evaluate the proposed L4L approach. The results demonstrate the superiority of L4L over state-of-the-art solutions.
AB - With the large-scale deployment of machine learning applications, much research attention has turned to exploiting the vast amounts of data stored on mobile clients. To preserve data privacy, federated learning has been proposed to enable large-scale machine learning across massive numbers of clients without exposing raw data. Existing works on federated learning strive to accelerate the learning process, but ignore the energy efficiency that is critical for resource-constrained clients. In this article, we propose to improve the energy efficiency of federated learning by lowering the CPU cycle frequencies of the faster clients in each training group. Based on this idea, we formulate an optimization problem that minimizes the total system cost, defined as a weighted sum of learning time and energy consumption. Due to the hardness of the formulated optimization problem and the unpredictability of network quality, we propose L4L (Learning for Learning), an experience-driven computational resource control approach based on deep reinforcement learning, which derives a near-optimal solution using only the clients' bandwidth information from previous training rounds. We conduct experiments on both real-world and synthetic traces to evaluate the proposed L4L approach. The results demonstrate the superiority of L4L over state-of-the-art solutions.
KW - Federated learning
KW - deep reinforcement learning
KW - experience-driven
UR - http://www.scopus.com/inward/record.url?scp=85103302838&partnerID=8YFLogxK
U2 - 10.1109/TC.2021.3068219
DO - 10.1109/TC.2021.3068219
M3 - Article
AN - SCOPUS:85103302838
SN - 0018-9340
VL - 71
SP - 971
EP - 983
JO - IEEE Transactions on Computers
JF - IEEE Transactions on Computers
IS - 4
ER -