TY - GEN
T1 - Content-centric caching using deep reinforcement learning in mobile computing
AU - Wang, Cairong
AU - Gai, Keke
AU - Guo, Jinnan
AU - Zhu, Liehuang
AU - Zhang, Zijian
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - In the era of the Internet, the number of connected devices has been increasing remarkably along with the growth of network-based services. Both service quality and user experience suffer significantly from latency when a large volume of concurrent user requests is made in the context of mobile computing. Deploying caching techniques at base stations or edge nodes is one alternative for dealing with the latency issue. However, traditional caching techniques, e.g., Least Recently Used (LRU) or Least Frequently Used (LFU), cannot efficiently resolve the latency caused by complex content-oriented popularity distributions. In this paper, we propose a Deep Reinforcement Learning (DRL)-based approach that makes caching storage adaptable to dynamic and complicated mobile networking environments. The proposed mechanism does not require a priori knowledge of the popularity distribution, so it offers higher adaptability and flexibility in practice compared with LRU and LFU. Our evaluation also compares the proposed approach with other deep learning methods, and the results suggest that our approach achieves higher accuracy.
AB - In the era of the Internet, the number of connected devices has been increasing remarkably along with the growth of network-based services. Both service quality and user experience suffer significantly from latency when a large volume of concurrent user requests is made in the context of mobile computing. Deploying caching techniques at base stations or edge nodes is one alternative for dealing with the latency issue. However, traditional caching techniques, e.g., Least Recently Used (LRU) or Least Frequently Used (LFU), cannot efficiently resolve the latency caused by complex content-oriented popularity distributions. In this paper, we propose a Deep Reinforcement Learning (DRL)-based approach that makes caching storage adaptable to dynamic and complicated mobile networking environments. The proposed mechanism does not require a priori knowledge of the popularity distribution, so it offers higher adaptability and flexibility in practice compared with LRU and LFU. Our evaluation also compares the proposed approach with other deep learning methods, and the results suggest that our approach achieves higher accuracy.
KW - Actor-critic algorithm
KW - Content caching
KW - Deep reinforcement learning
KW - Mobile computing
UR - http://www.scopus.com/inward/record.url?scp=85068359294&partnerID=8YFLogxK
U2 - 10.1109/HPBDIS.2019.8735483
DO - 10.1109/HPBDIS.2019.8735483
M3 - Conference contribution
AN - SCOPUS:85068359294
T3 - 2019 International Conference on High Performance Big Data and Intelligent Systems, HPBD and IS 2019
SP - 1
EP - 6
BT - 2019 International Conference on High Performance Big Data and Intelligent Systems, HPBD and IS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 International Conference on High Performance Big Data and Intelligent Systems, HPBD and IS 2019
Y2 - 9 May 2019 through 11 May 2019
ER -