TY - JOUR
T1 - Caching-Enabled Computation Offloading in Multi-Region MEC Network via Deep Reinforcement Learning
AU - Yang, Song
AU - Liu, Jintian
AU - Zhang, Fei
AU - Li, Fan
AU - Chen, Xu
AU - Fu, Xiaoming
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2022/11/1
Y1 - 2022/11/1
N2 - With the rapid development of the Internet, ever more computation-intensive applications with stringent requirements on computing delay and energy consumption have emerged. Recently, offloading to mobile-edge computing servers has been considered an effective way to reduce latency and energy consumption. In addition, applications such as autonomous driving generate a large number of repetitive tasks, so caching the computational results of popular tasks can avoid the overhead of repeated processing. In this article, we study the problem of computation offloading for users in multiple regions. The optimization goal is to choose offloading and caching strategies that minimize the total delay and energy consumption across all regions. We first apply the deep reinforcement learning (DRL) deep deterministic policy gradient (DDPG) framework to solve the computation offloading problem in a single region. We then show the inefficiency of existing collaborative caching approaches across multiple regions and propose a new collaborative caching algorithm (CCA) to improve the overall cache hit rate of the system. Finally, we integrate the DDPG and CCA algorithms into a holistic, efficient caching and offloading strategy for all regions. Simulation results show that the proposed algorithm significantly improves the cache hit rate and performs excellently in reducing total system overhead.
AB - With the rapid development of the Internet, ever more computation-intensive applications with stringent requirements on computing delay and energy consumption have emerged. Recently, offloading to mobile-edge computing servers has been considered an effective way to reduce latency and energy consumption. In addition, applications such as autonomous driving generate a large number of repetitive tasks, so caching the computational results of popular tasks can avoid the overhead of repeated processing. In this article, we study the problem of computation offloading for users in multiple regions. The optimization goal is to choose offloading and caching strategies that minimize the total delay and energy consumption across all regions. We first apply the deep reinforcement learning (DRL) deep deterministic policy gradient (DDPG) framework to solve the computation offloading problem in a single region. We then show the inefficiency of existing collaborative caching approaches across multiple regions and propose a new collaborative caching algorithm (CCA) to improve the overall cache hit rate of the system. Finally, we integrate the DDPG and CCA algorithms into a holistic, efficient caching and offloading strategy for all regions. Simulation results show that the proposed algorithm significantly improves the cache hit rate and performs excellently in reducing total system overhead.
KW - Collaborative caching
KW - computation offloading
KW - deep reinforcement learning (DRL)
KW - edge computing
KW - user migration
UR - http://www.scopus.com/inward/record.url?scp=85130476352&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2022.3176289
DO - 10.1109/JIOT.2022.3176289
M3 - Article
AN - SCOPUS:85130476352
SN - 2327-4662
VL - 9
SP - 21086
EP - 21098
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 21
ER -