Abstract
To alleviate ever-increasing data demands, edge caching plays a crucial role in improving system performance, especially in data-intensive applications. Previous works mainly focus on caching policies over reliable channels. In unreliable channel scenarios, system performance is jointly affected by user preference and channel reliability, both of which are commonly unknown, and a high retrieval cost may be incurred over unreliable channels even when the requested content is in a nearby cache. To address these issues, we jointly optimize the service scheduling policy and the content caching policy in this paper. We propose a maximal reward priority (MRP) policy to serve user requests and a collaborative multi-agent actor-critic (CMA-AC) policy to update the local cache. Simulation results show that the proposed MRP policy outperforms the shortest distance priority (SDP) policy [4], and that the proposed CMA-AC policy achieves better performance than a distributed multi-agent deep Q-network (DMA-DQN) policy, especially when the number of contents and the capacity of the local cache are large. Furthermore, the proposed CMA-AC policy is robust.
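The abstract does not spell out the MRP rule, but a minimal sketch of a maximal-reward-priority style scheduling decision is given below, assuming the expected reward of serving a request from a cache node is the estimated channel success probability times the content value minus a retrieval cost. The names (`CacheNode`, `expected_reward`, `select_serving_node`) and the reward form are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of a maximal-reward-priority (MRP) style scheduling rule.
# The reward model (success_prob * content_value - retrieval_cost) and all names
# are assumptions for illustration; the paper's exact formulation may differ.
from dataclasses import dataclass

@dataclass
class CacheNode:
    name: str
    success_prob: float    # estimated reliability of the channel to this node
    retrieval_cost: float  # cost of fetching the content from this node
    has_content: bool      # whether the requested content is cached here

def expected_reward(node: CacheNode, content_value: float) -> float:
    """Expected reward of serving the request from this node."""
    if not node.has_content:
        return float("-inf")
    return node.success_prob * content_value - node.retrieval_cost

def select_serving_node(nodes: list[CacheNode], content_value: float) -> CacheNode | None:
    """Serve the request from the candidate node with the maximal expected reward."""
    candidates = [n for n in nodes if n.has_content]
    if not candidates:
        return None  # fall back to fetching from the origin server
    return max(candidates, key=lambda n: expected_reward(n, content_value))

if __name__ == "__main__":
    nodes = [
        CacheNode("local",    success_prob=0.6, retrieval_cost=0.1, has_content=True),
        CacheNode("neighbor", success_prob=0.9, retrieval_cost=0.3, has_content=True),
    ]
    best = select_serving_node(nodes, content_value=1.0)
    print("serve from:", best.name if best else "origin server")
```

Under this hypothetical reward model, a nearby but unreliable cache can lose to a farther yet more reliable one, which is the trade-off the abstract attributes to unreliable channels.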
| Original language | English |
|---|---|
| Article number | 9322536 |
| Journal | Proceedings - IEEE Global Communications Conference, GLOBECOM |
| DOIs | |
| Publication status | Published - 2020 |
| Externally published | Yes |
| Event | 2020 IEEE Global Communications Conference, GLOBECOM 2020 - Virtual, Taipei, Taiwan, Province of China |
| Duration | 7 Dec 2020 → 11 Dec 2020 |
Keywords
- Cooperative caching
- deep reinforcement learning
- service scheduling
- unreliable channel