TY - JOUR
T1 - Resilience-Driven Topology Reconfiguration via Hierarchical Deep Reinforcement Learning in Low-Altitude UAV Networks
AU - Zhang, Jingbin
AU - Zheng, Dezhi
AU - Yang, Zhengzhi
AU - Li, Yumeng
AU - Du, Wenbo
AU - Quek, Tony Q.S.
AU - Wang, Shuai
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2026
Y1 - 2026
N2 - In low-altitude wireless networks, Unmanned Aerial Vehicles (UAVs) are vulnerable to environmental disturbances, which can cause failures that disrupt network topology and weaken coverage and backhaul. Consequently, network reconfiguration has become an urgent problem. Such reconfiguration must jointly consider coverage and backhaul, while obstacles in low-altitude environments further increase its complexity. To address this problem, we propose UR-HDRL (UAV network Reconfiguration based on Hierarchical Deep Reinforcement Learning), a novel framework that adopts a hierarchical architecture to decouple safety constraints from communication performance optimization. The algorithm integrates Control Barrier Functions (CBFs) and Graph Neural Networks (GNNs) to ensure safety and enhance collaborative decision-making in environments with obstacles. Experimental results indicate that UR-HDRL achieves significant improvements in data transmission efficiency, network coverage, and collision avoidance compared with baseline methods. The results also reveal distinct differences between communication coverage and sensing coverage, highlighting the inherent trade-offs between them.
AB - In low-altitude wireless networks, Unmanned Aerial Vehicles (UAVs) are vulnerable to environmental disturbances, which can cause failures that disrupt network topology and weaken coverage and backhaul. Consequently, network reconfiguration has become an urgent problem. Such reconfiguration must jointly consider coverage and backhaul, while obstacles in low-altitude environments further increase its complexity. To address this problem, we propose UR-HDRL (UAV network Reconfiguration based on Hierarchical Deep Reinforcement Learning), a novel framework that adopts a hierarchical architecture to decouple safety constraints from communication performance optimization. The algorithm integrates Control Barrier Functions (CBFs) and Graph Neural Networks (GNNs) to ensure safety and enhance collaborative decision-making in environments with obstacles. Experimental results indicate that UR-HDRL achieves significant improvements in data transmission efficiency, network coverage, and collision avoidance compared with baseline methods. The results also reveal distinct differences between communication coverage and sensing coverage, highlighting the inherent trade-offs between them.
KW - Low-altitude Wireless Networks
KW - Multi-agent Reinforcement Learning
KW - Network Reconfiguration
KW - UAV Communications
UR - https://www.scopus.com/pages/publications/105026462271
U2 - 10.1109/TNSE.2025.3650347
DO - 10.1109/TNSE.2025.3650347
M3 - Article
AN - SCOPUS:105026462271
SN - 2327-4697
JO - IEEE Transactions on Network Science and Engineering
JF - IEEE Transactions on Network Science and Engineering
ER -