TY - JOUR
T1 - Learning-Based Energy-Efficient Data Collection by Unmanned Vehicles in Smart Cities
AU - Zhang, Bo
AU - Liu, Chi Harold
AU - Tang, Jian
AU - Xu, Zhiyuan
AU - Ma, Jian
AU - Wang, Wendong
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2018/4
Y1 - 2018/4
N2 - Mobile crowdsourcing (MCS) is now an important source of information for smart cities, especially with the help of unmanned aerial vehicles (UAVs) and driverless cars. These vehicles are equipped with various kinds of high-precision sensors and can be fully scheduled/controlled during data collection, which makes the MCS system more robust. However, they are limited by energy constraints, especially for long-term, long-distance sensing tasks, and cities are often too crowded to deploy stationary charging stations. Toward this end, in this paper we propose to leverage emerging deep reinforcement learning (DRL) techniques to enable model-free control of unmanned vehicles, and we present a novel and highly effective control framework, called 'DRL-RVC.' It utilizes a convolutional neural network to extract features from the necessary information (including sample distribution, traffic flow, etc.), and then makes decisions under the guidance of a deep Q network. That is, UAVs cruise the city without manual control and collect most of the required data in the sensing region, while a mobile unmanned charging station reaches the charging point in the shortest possible time. Finally, we validate and evaluate the proposed framework via extensive simulations based on a real dataset from Rome. The simulation results justify the effectiveness and robustness of our approach.
AB - Mobile crowdsourcing (MCS) is now an important source of information for smart cities, especially with the help of unmanned aerial vehicles (UAVs) and driverless cars. These vehicles are equipped with various kinds of high-precision sensors and can be fully scheduled/controlled during data collection, which makes the MCS system more robust. However, they are limited by energy constraints, especially for long-term, long-distance sensing tasks, and cities are often too crowded to deploy stationary charging stations. Toward this end, in this paper we propose to leverage emerging deep reinforcement learning (DRL) techniques to enable model-free control of unmanned vehicles, and we present a novel and highly effective control framework, called 'DRL-RVC.' It utilizes a convolutional neural network to extract features from the necessary information (including sample distribution, traffic flow, etc.), and then makes decisions under the guidance of a deep Q network. That is, UAVs cruise the city without manual control and collect most of the required data in the sensing region, while a mobile unmanned charging station reaches the charging point in the shortest possible time. Finally, we validate and evaluate the proposed framework via extensive simulations based on a real dataset from Rome. The simulation results justify the effectiveness and robustness of our approach.
KW - Data crowdsourcing
KW - energy-efficiency
KW - smart city
UR - http://www.scopus.com/inward/record.url?scp=85038872354&partnerID=8YFLogxK
U2 - 10.1109/TII.2017.2783439
DO - 10.1109/TII.2017.2783439
M3 - Article
AN - SCOPUS:85038872354
SN - 1551-3203
VL - 14
SP - 1666
EP - 1676
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
IS - 4
ER -