TY - JOUR
T1 - Energy-Efficient Computation Offloading in Aerial Edge Networks With Multi-Agent Cooperation
AU - Liu, Wenshuai
AU - Li, Bin
AU - Xie, Wancheng
AU - Dai, Yueyue
AU - Fei, Zesong
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/9/1
Y1 - 2023/9/1
AB - Owing to its high flexibility in supporting resource-intensive and time-sensitive applications, unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has been proposed as an innovative paradigm for serving mobile users (MUs). As a promising technology, the digital twin (DT) can map physical entities to virtual models in a timely manner and reflect the MEC network state in real time. In this paper, we first propose an MEC network with multiple movable UAVs and one DT-empowered ground base station to enhance the MEC service for MUs. Considering the limited energy resources of both MUs and UAVs, we formulate an online resource-scheduling problem to minimize their weighted energy consumption. To tackle this combinatorial problem, we cast it as a Markov decision process (MDP) with multiple types of agents. Since the proposed MDP has huge state and action spaces, we develop a deep reinforcement learning approach based on multi-agent proximal policy optimization (MAPPO) with a Beta distribution and an attention mechanism to pursue the optimal computation offloading policy. Numerical results show that the proposed scheme efficiently reduces energy consumption and outperforms the benchmarks in performance, convergence speed, and resource utilization.
KW - Mobile edge computing
KW - computation offloading
KW - deep reinforcement learning
KW - digital twin
KW - unmanned aerial vehicle
UR - http://www.scopus.com/inward/record.url?scp=85147313484&partnerID=8YFLogxK
U2 - 10.1109/TWC.2023.3235997
DO - 10.1109/TWC.2023.3235997
M3 - Article
AN - SCOPUS:85147313484
SN - 1536-1276
VL - 22
SP - 5725
EP - 5739
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
IS - 9
ER -