Deep Reinforcement Learning for Adaptive Caching in Hierarchical Content Delivery Networks

Alireza Sadeghi*, Gang Wang, Georgios B. Giannakis

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

103 Citations (Scopus)

Abstract

Caching is envisioned to play a critical role in next-generation content delivery infrastructure, cellular networks, and Internet architectures. By smartly storing the most popular contents at storage-enabled network entities during off-peak demand instances, caching can benefit both the network infrastructure and end users during on-peak periods. In this context, distributing the limited storage capacity across network entities calls for decentralized caching schemes. Many practical caching systems involve a parent caching node connected to multiple leaf nodes that serve user file requests. To model the two-way interactive influence between caching decisions at the parent and leaf nodes, a reinforcement learning (RL) framework is put forth. To handle the large continuous state space, a scalable deep RL approach is pursued. The novel approach relies on a hyper-deep Q-network to learn the Q-function, and thus the optimal caching policy, in an online fashion. Reinforcing the parent node with the ability to learn and adapt to the unknown policies of leaf nodes, as well as to the spatio-temporal dynamic evolution of file requests, results in remarkable caching performance, as corroborated through numerical tests.
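To make the online learn-and-cache loop concrete, the sketch below shows a heavily simplified, stateless stand-in for the approach the abstract describes: a single caching node maintains running action-value estimates per file (an exponential moving average of observed request frequencies, playing the role that the paper's hyper-deep Q-network plays for the full parent-leaf state) and greedily caches the top-valued files with a little epsilon-exploration. All names, catalog sizes, and the Zipf-like request model are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FILES = 50     # catalog size (hypothetical)
CACHE_SIZE = 10  # storage slots at the caching node (hypothetical)
ALPHA = 0.1      # learning rate for the running value estimates

# Per-file action values q[f]: an online estimate of the value of keeping
# file f cached. This EMA is a stateless stand-in for the paper's deep
# Q-network, which learns a Q-function over a much richer state space.
q = np.zeros(N_FILES)

def draw_requests(n=200):
    # Zipf-like file popularity, unknown to the caching agent.
    p = 1.0 / (np.arange(N_FILES) + 1.0)
    p /= p.sum()
    return rng.choice(N_FILES, size=n, p=p)

# Start from a deliberately bad cache: the least popular files.
cache = np.arange(N_FILES - CACHE_SIZE, N_FILES)

hit_rates = []
for t in range(300):
    reqs = draw_requests()
    hit_rates.append(np.isin(reqs, cache).mean())  # reward proxy: hit rate
    # Feedback signal: per-file request frequency in this window.
    freq = np.bincount(reqs, minlength=N_FILES) / len(reqs)
    q += ALPHA * (freq - q)  # running-average value update
    # Greedy caching policy with epsilon-exploration: keep the top-valued
    # files, occasionally swapping in a random file to keep exploring.
    cache = np.argsort(q)[-CACHE_SIZE:]
    if rng.random() < 0.05:
        cache[0] = rng.integers(N_FILES)

early, late = hit_rates[0], float(np.mean(hit_rates[-20:]))
print(f"hit rate: first window={early:.2f}, last 20 windows={late:.2f}")
```

Starting from the least popular files, the learned cache converges toward the most requested ones, so the hit rate climbs as the value estimates sharpen; the paper's deep RL agent additionally conditions on leaf-node behavior and time-varying popularity, which this toy loop deliberately omits.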

Original language: English
Article number: 8807260
Pages (from-to): 1024-1033
Number of pages: 10
Journal: IEEE Transactions on Cognitive Communications and Networking
Volume: 5
Issue number: 4
DOIs
Publication status: Published - Dec 2019
Externally published: Yes

Keywords

  • Caching
  • deep Q-network
  • deep RL
  • function approximation
  • next-generation networks
