TY - JOUR
T1 - Energy-efficient UAV control for effective and fair communication coverage
T2 - A deep reinforcement learning approach
AU - Liu, Chi Harold
AU - Chen, Zheyu
AU - Tang, Jian
AU - Xu, Jie
AU - Piao, Chengzhe
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/9
Y1 - 2018/9
N2 - Unmanned aerial vehicles (UAVs) can serve as aerial base stations to enhance both the coverage and performance of communication networks in various scenarios, such as emergency communications and network access for remote areas. Mobile UAVs can establish communication links for ground users to deliver packets. However, UAVs have limited communication ranges and energy resources. In particular, for a large region, they cannot cover the entire area all the time or keep flying for a long time. It is thus challenging to control a group of UAVs to achieve a certain level of communication coverage in the long run, while preserving their connectivity and minimizing their energy consumption. Toward this end, we propose to leverage emerging deep reinforcement learning (DRL) for UAV control and present a novel and highly energy-efficient DRL-based method, which we call DRL-based energy-efficient control for coverage and connectivity (DRL-EC3). The proposed method 1) maximizes a novel energy efficiency function with joint consideration of communication coverage, fairness, energy consumption, and connectivity; 2) learns the environment and its dynamics; and 3) makes decisions under the guidance of two powerful deep neural networks. We conduct extensive simulations for performance evaluation. Simulation results show that DRL-EC3 significantly and consistently outperforms two commonly used baseline methods in terms of coverage, fairness, and energy consumption.
AB - Unmanned aerial vehicles (UAVs) can serve as aerial base stations to enhance both the coverage and performance of communication networks in various scenarios, such as emergency communications and network access for remote areas. Mobile UAVs can establish communication links for ground users to deliver packets. However, UAVs have limited communication ranges and energy resources. In particular, for a large region, they cannot cover the entire area all the time or keep flying for a long time. It is thus challenging to control a group of UAVs to achieve a certain level of communication coverage in the long run, while preserving their connectivity and minimizing their energy consumption. Toward this end, we propose to leverage emerging deep reinforcement learning (DRL) for UAV control and present a novel and highly energy-efficient DRL-based method, which we call DRL-based energy-efficient control for coverage and connectivity (DRL-EC3). The proposed method 1) maximizes a novel energy efficiency function with joint consideration of communication coverage, fairness, energy consumption, and connectivity; 2) learns the environment and its dynamics; and 3) makes decisions under the guidance of two powerful deep neural networks. We conduct extensive simulations for performance evaluation. Simulation results show that DRL-EC3 significantly and consistently outperforms two commonly used baseline methods in terms of coverage, fairness, and energy consumption.
KW - UAV control
KW - communication coverage
KW - deep reinforcement learning
KW - energy efficiency
UR - http://www.scopus.com/inward/record.url?scp=85052556450&partnerID=8YFLogxK
U2 - 10.1109/JSAC.2018.2864373
DO - 10.1109/JSAC.2018.2864373
M3 - Article
AN - SCOPUS:85052556450
SN - 0733-8716
VL - 36
SP - 2059
EP - 2070
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 9
M1 - 8432464
ER -