D2D-Assisted Multi-User Cooperative Partial Offloading in MEC Based on Deep Reinforcement Learning

Xin Guan, Tiejun Lv, Zhipeng Lin*, Pingmu Huang, Jie Zeng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Mobile edge computing (MEC) and device-to-device (D2D) communication can alleviate the resource constraints of mobile devices and reduce communication latency. In this paper, we construct a D2D-MEC framework and study multi-user cooperative partial offloading and computing resource allocation. We maximize the number of devices served, subject to each application's maximum delay constraint and the limited computing resources. In the considered system, each user can offload its tasks both to an edge server and to a nearby D2D device. We formulate the optimization problem, show that it is NP-hard, and decouple it into two subproblems. The first subproblem is solved with convex optimization, and the second is modeled as a Markov decision process (MDP). A deep reinforcement learning algorithm based on a deep Q network (DQN) is developed to maximize the number of tasks the system can compute. Extensive simulation results demonstrate the effectiveness and superiority of the proposed scheme.
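To illustrate the reinforcement-learning step described in the abstract, the sketch below applies tabular Q-learning (one of the paper's keywords) to a drastically simplified offloading MDP. The environment, its states (discretized spare edge capacity), its three actions (edge, D2D, local), and all reward values are invented for illustration; the paper itself trains a DQN over a richer state space.

```python
import random

# Toy offloading MDP, loosely inspired by the paper's setup (illustrative only):
#   state  = discretized spare edge-server capacity, 0..N_STATES-1
#   action = 0 offload to edge, 1 offload to a D2D neighbor, 2 compute locally
#   reward = hypothetical proxy for tasks finished within the delay budget
N_STATES, N_ACTIONS = 5, 3

def step(state, action):
    """Return (reward, next_state) for the toy environment."""
    if action == 0:                           # edge offload
        reward = 1.0 if state > 0 else -1.0   # fails when the edge is saturated
        nxt = max(state - 1, 0)               # consumes edge capacity
    elif action == 1:                         # D2D offload: modest, always feasible
        reward, nxt = 0.5, state
    else:                                     # local computing: slowest option
        reward, nxt = 0.2, min(state + 1, N_STATES - 1)  # edge capacity recovers
    return reward, nxt

def train(episodes=2000, horizon=20, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning (the paper uses a DQN; a table suffices for this toy)."""
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            r, s2 = step(s, a)
            # standard Q-learning temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# Greedy policy per state; with plenty of spare capacity, edge offloading
# (action 0) should dominate in this toy model.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

The tabular update is the same Bellman backup a DQN approximates with a neural network; replacing `Q` with a network and the update with a gradient step on the TD error recovers the DQN structure the paper builds on.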

Original language: English
Article number: 7004
Journal: Sensors
Volume: 22
Issue number: 18
DOIs
Publication status: Published - Sept 2022

Keywords

  • D2D communication
  • Q learning
  • deep Q-network
  • mobile edge computing
  • partial offloading

