TY - JOUR
T1 - Toward Reinforcement-Learning-Based Service Deployment of 5G Mobile Edge Computing with Request-Aware Scheduling
AU - Zhai, Yanlong
AU - Bao, Tianhong
AU - Zhu, Liehuang
AU - Shen, Meng
AU - Du, Xiaojiang
AU - Guizani, Mohsen
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2020/2
Y1 - 2020/2
N2 - 5G wireless network technology will not only significantly increase bandwidth but also introduce new features such as massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC). However, high request latency will remain a challenging problem even with 5G, because the massive volume of requests generated by an ever-increasing number of devices must travel long distances to services deployed in cloud centers. By pushing services closer to the network edge, edge computing is recognized as a promising technology for reducing latency. However, properly deploying services among resource-constrained edge servers remains an open problem. In this article, we propose a deep reinforcement learning approach that deploys services to edge servers while accounting for user request patterns and resource constraints, factors that have not been adequately explored. First, the system model and optimization objectives are formulated and investigated. Then the problem is modeled as a Markov decision process and solved using the Dueling Deep Q-Network algorithm. Experimental results based on real-life mobile wireless datasets show that this reinforcement learning approach can adapt to request patterns and improve performance.
UR - http://www.scopus.com/inward/record.url?scp=85081694901&partnerID=8YFLogxK
U2 - 10.1109/MWC.001.1900298
DO - 10.1109/MWC.001.1900298
M3 - Article
AN - SCOPUS:85081694901
SN - 1536-1284
VL - 27
SP - 84
EP - 91
JO - IEEE Wireless Communications
JF - IEEE Wireless Communications
IS - 1
M1 - 9023928
ER -