TY - JOUR
T1 - Distributed Energy-Efficient Multi-UAV Navigation for Long-Term Communication Coverage by Deep Reinforcement Learning
AU - Liu, Chi Harold
AU - Ma, Xiaoxin
AU - Gao, Xudong
AU - Tang, Jian
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/6/1
Y1 - 2020/6/1
AB - In this paper, we aim to design a fully distributed control solution that navigates a group of unmanned aerial vehicles (UAVs), acting as mobile base stations (BSs), to fly around a target area and provide long-term communication coverage for ground mobile users. Unlike existing solutions, which mainly approach the problem from an optimization perspective, we propose a decentralized deep reinforcement learning (DRL) based framework that controls each UAV in a distributed manner. Our goal is to maximize the temporal average coverage score achieved by all UAVs in a task, maximize the geographical fairness over all considered points of interest (PoIs), and minimize the total energy consumption, while keeping the UAVs connected and within the area border. We explicitly design the state, observation, action space, and reward, and model each UAV with deep neural networks (DNNs). We conduct extensive simulations to identify an appropriate set of hyperparameters, including the experience replay buffer size, the number of units in the two fully connected hidden layers of the actor, critic, and their target networks, and the discount factor that weights future rewards. The simulation results demonstrate the superiority of the proposed model over the state-of-the-art DRL-EC3 approach based on deep deterministic policy gradient (DDPG) and over three other baselines.
KW - UAV control
KW - communication coverage
KW - deep reinforcement learning
KW - energy efficiency
UR - http://www.scopus.com/inward/record.url?scp=85084932368&partnerID=8YFLogxK
DO - 10.1109/TMC.2019.2908171
M3 - Article
AN - SCOPUS:85084932368
SN - 1536-1233
VL - 19
SP - 1274
EP - 1285
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 6
M1 - 8676325
ER -