TY - GEN
T1 - A Deep Reinforcement Learning-based Task Scheduling Algorithm for Energy Efficiency in Data Centers
AU - Song, Penglei
AU - Chi, Ce
AU - Ji, Kaixuan
AU - Liu, Zhiyong
AU - Zhang, Fa
AU - Zhang, Shikui
AU - Qiu, Dehui
AU - Wan, Xiaohua
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7
Y1 - 2021/7
N2 - Cloud data centers support a wide range of end-user applications, including scientific computing and smart grids. The number and size of data centers have grown rapidly in recent years, causing severe environmental problems and enormous power demand. It is therefore desirable to use a proper scheduling method to optimize resource usage and reduce energy consumption in a data center. However, designing an effective and efficient task scheduling algorithm is difficult because of the dynamic and complex environment of data centers. This paper proposes a task scheduling algorithm, WSS, which optimizes resource usage and reduces energy consumption using a model-free deep reinforcement learning framework inspired by the Wolpertinger architecture. The proposed algorithm can handle scheduling over a large discrete action space, improve decision efficiency, and shorten training convergence time. In addition, WSS is built on Soft Actor-Critic to improve its stability and exploration capability. Experiments on real-world traces show that WSS reduces energy consumption by nearly 25% compared with a Deep Q-network task scheduling algorithm. Moreover, WSS converges quickly during training without increasing the average waiting time of tasks and achieves stable performance.
AB - Cloud data centers support a wide range of end-user applications, including scientific computing and smart grids. The number and size of data centers have grown rapidly in recent years, causing severe environmental problems and enormous power demand. It is therefore desirable to use a proper scheduling method to optimize resource usage and reduce energy consumption in a data center. However, designing an effective and efficient task scheduling algorithm is difficult because of the dynamic and complex environment of data centers. This paper proposes a task scheduling algorithm, WSS, which optimizes resource usage and reduces energy consumption using a model-free deep reinforcement learning framework inspired by the Wolpertinger architecture. The proposed algorithm can handle scheduling over a large discrete action space, improve decision efficiency, and shorten training convergence time. In addition, WSS is built on Soft Actor-Critic to improve its stability and exploration capability. Experiments on real-world traces show that WSS reduces energy consumption by nearly 25% compared with a Deep Q-network task scheduling algorithm. Moreover, WSS converges quickly during training without increasing the average waiting time of tasks and achieves stable performance.
KW - Cloud computing
KW - deep reinforcement learning
KW - energy efficiency
KW - task scheduling
KW - Wolpertinger architecture
UR - http://www.scopus.com/inward/record.url?scp=85114960839&partnerID=8YFLogxK
U2 - 10.1109/ICCCN52240.2021.9522309
DO - 10.1109/ICCCN52240.2021.9522309
M3 - Conference contribution
AN - SCOPUS:85114960839
T3 - Proceedings - International Conference on Computer Communications and Networks, ICCCN
BT - 30th International Conference on Computer Communications and Networks, ICCCN 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 30th International Conference on Computer Communications and Networks, ICCCN 2021
Y2 - 19 July 2021 through 22 July 2021
ER -