Toward Reinforcement-Learning-Based Service Deployment of 5G Mobile Edge Computing with Request-Aware Scheduling

Yanlong Zhai, Tianhong Bao, Liehuang Zhu, Meng Shen, Xiaojiang Du, Mohsen Guizani

Research output: Contribution to journal › Article › peer-review

41 Citations (Scopus)

Abstract

5G wireless network technology will not only significantly increase bandwidth but also introduce new features such as mMTC and URLLC. However, high request latency will remain a challenging problem even with 5G, because the massive number of requests generated by an ever-growing population of devices must still travel long distances to services deployed in cloud data centers. By pushing services closer to the edge of the network, edge computing is recognized as a promising technology for reducing latency. However, properly deploying services among resource-constrained edge servers remains an open problem. In this article, we propose a deep reinforcement learning approach that deploys services to edge servers while accounting for users' request patterns and resource constraints, factors that have not been adequately explored. First, the system model and optimization objectives are formulated and investigated. The problem is then modeled as a Markov decision process and solved with the Dueling Deep Q-Network (Dueling DQN) algorithm. Experimental results on real-life mobile wireless datasets show that the proposed reinforcement learning approach can adapt to request patterns and improve performance.
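For illustration only (this is not the authors' implementation), the sketch below shows the core of a Dueling DQN in PyTorch: a shared feature layer feeding separate state-value and advantage streams that are combined into Q-values, the architecture named in the abstract. The state and action dimensions are hypothetical placeholders; for instance, the state might encode per-region request rates and edge-server loads, and each action might select the edge server that hosts a service.

    # Minimal Dueling DQN sketch in PyTorch (illustrative; state/action
    # semantics are assumptions, not taken from the paper).
    import torch
    import torch.nn as nn

    class DuelingDQN(nn.Module):
        def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
            super().__init__()
            # Shared feature extractor over the observed system state
            self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            # Value stream V(s) and advantage stream A(s, a)
            self.value = nn.Linear(hidden, 1)
            self.advantage = nn.Linear(hidden, num_actions)

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.feature(state)
            v = self.value(h)
            a = self.advantage(h)
            # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
            return v + a - a.mean(dim=1, keepdim=True)

    # Example usage with made-up dimensions: a 32-dimensional state
    # (e.g., request counts and server loads) and 8 candidate edge servers.
    net = DuelingDQN(state_dim=32, num_actions=8)
    q_values = net(torch.randn(1, 32))  # shape: (1, 8)

The dueling decomposition lets the network learn how valuable a system state is independently of which deployment action is chosen, which is the motivation for preferring Dueling DQN over a plain DQN in problems where many actions yield similar returns.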

Original language: English
Article number: 9023928
Pages (from-to): 84-91
Number of pages: 8
Journal: IEEE Wireless Communications
Volume: 27
Issue number: 1
Publication status: Published - Feb 2020

