TY - JOUR
T1 - When UAVs Meet Cognitive Radio
T2 - Offloading Traffic under Uncertain Spectrum Environment via Deep Reinforcement Learning
AU - Li, Xuanheng
AU - Cheng, Sike
AU - Ding, Haichuan
AU - Pan, Miao
AU - Zhao, Nan
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2023/2/1
Y1 - 2023/2/1
N2 - The emerging Internet of Things (IoT) paradigm makes telecommunications networks increasingly congested. Unmanned aerial vehicles (UAVs) have been regarded as a promising solution to offload the overwhelming traffic. Considering the limited spectrum, cognitive radio can be embedded into UAVs to build backhaul links by harvesting idle spectrum. For the cognitive UAV (CUAV) assisted network, how much traffic can actually be offloaded depends not only on the traffic demand but also on the spectrum environment. It is necessary to jointly consider both issues and co-design the trajectory and communications for the CUAV so that data collection and data transmission are balanced to achieve high offloading efficiency, which, however, is non-trivial because of the heterogeneous and uncertain network environment. In this paper, aiming at maximizing the energy efficiency of CUAV-assisted traffic offloading, we jointly design the Trajectory, Time allocation for data collection and data transmission, Band selection, and Transmission power control ($\text{T}^3\text{B}$) considering the heterogeneous environment in terms of traffic demand, energy replenishment, and spectrum availability. Considering the uncertain environmental information, we develop a model-free deep reinforcement learning (DRL) based solution that enables the CUAV to make the best decision autonomously. Simulation results have shown the effectiveness of the proposed DRL-$\text{T}^3\text{B}$ strategy.
KW - UAV-assisted network
KW - cognitive radio
KW - deep reinforcement learning
KW - energy efficiency
KW - traffic offloading
UR - http://www.scopus.com/inward/record.url?scp=85137572561&partnerID=8YFLogxK
U2 - 10.1109/TWC.2022.3198665
DO - 10.1109/TWC.2022.3198665
M3 - Article
AN - SCOPUS:85137572561
SN - 1536-1276
VL - 22
SP - 824
EP - 838
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
IS - 2
ER -