TY - JOUR
T1 - Delay-Sensitive Energy-Efficient UAV Crowdsensing by Deep Reinforcement Learning
AU - Dai, Zipeng
AU - Liu, Chi Harold
AU - Han, Rui
AU - Wang, Guoren
AU - Leung, Kin K.
AU - Tang, Jian
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - Mobile crowdsensing (MCS) by unmanned aerial vehicles (UAVs) has become popular for delay-sensitive applications, where a group of UAVs is navigated to exploit their high-precision onboard sensors and durability for data collection in harsh environments. In this paper, we aim to simultaneously maximize the amount of collected data and geographical fairness while minimizing the energy consumption of all UAVs, and to guarantee data freshness by setting a deadline in each timeslot. Specifically, we propose a centralized-control, distributed-execution framework based on decentralized deep reinforcement learning (DRL) for delay-sensitive and energy-efficient UAV crowdsensing, called 'DRL-eFresh'. It includes a synchronous computational architecture with GRU sequential modeling to generate multi-UAV navigation decisions. We also derive an optimal time-allocation solution for data collection that accounts for the efforts of all UAVs and avoids excessive data dropout caused by limited data upload time and wireless data rate. Simulation results show that, compared with the best baseline DPPO, DRL-eFresh improves energy efficiency by 14% and 22% on average when varying the sensing range and the number of PoIs, respectively.
KW - UAV crowdsensing
KW - deep reinforcement learning
KW - delay-sensitive applications
KW - energy-efficiency
UR - http://www.scopus.com/inward/record.url?scp=85115142382&partnerID=8YFLogxK
U2 - 10.1109/TMC.2021.3113052
DO - 10.1109/TMC.2021.3113052
M3 - Article
AN - SCOPUS:85115142382
SN - 1536-1233
VL - 22
SP - 2038
EP - 2052
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 4
ER -