Optimizing resource allocation in UAV-assisted ultra-dense networks for enhanced performance and security

Pei Gen Ye, Jun Zheng*, Xiaojun Ren, Jinbin Huang, Zhenxin Zhang, Yan Pang, Guang Kou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The deployment of unmanned aerial vehicles (UAVs) in ultra-dense networks (UDNs) has significantly advanced network capabilities in 5G/6G environments, addressing coverage enhancement and security concerns. Our research presents a deep reinforcement learning (DRL) based approach designed to manage the growing data traffic demands and limited communication resources in UAV-assisted UDNs. Traditional DRL methods often struggle with challenges such as low sample efficiency and energy wastage, which can indirectly affect network security and stability. To address these concerns, we introduce the Stabilizing Transformers based Potential Driven Reinforcement Learning (STPD-RL) framework. STPD-RL optimizes critical network operations such as transmission link selection and power allocation, directly contributing to improved energy efficiency and robust network performance. First, we refine potential driven experience replay and apply it to resource allocation in UAV-assisted UDNs for the first time. By assigning a potential energy function to each state in the experience replay, the agent can exploit intrinsic state supervision to learn from a spectrum of good and bad experiences. Second, we employ stabilizing transformers to accelerate the learning of resource allocation policies, thereby improving the stability of model training. Furthermore, we integrate potential driven experience replay and stabilizing transformers into the Proximal Policy Optimization algorithm, yielding our tailored STPD-PPO. In simulations with many users and base stations, STPD-PPO outperformed traditional PPO in metrics such as entropy loss, policy loss, and value loss. The results suggest that STPD-PPO surpasses traditional DRL algorithms in several respects, including convergence rate, energy efficiency, total power consumption, and exploration capacity.
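The abstract does not specify the exact form of the potential energy function or the sampling rule, so the following Python sketch is only one illustrative way a potential-driven replay buffer could be structured; the `potential_fn` definition, the weighting scheme, and all names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of potential-driven experience replay (not the paper's code):
# each stored transition receives a scalar "potential", and minibatch sampling is
# weighted by that potential so the learner sees both notably good and notably bad
# experiences more often.

import random
from collections import deque

import numpy as np


class PotentialReplayBuffer:
    def __init__(self, capacity=10_000, eps=1e-3):
        self.buffer = deque(maxlen=capacity)
        self.potentials = deque(maxlen=capacity)
        self.eps = eps  # keeps every transition sampleable

    def potential_fn(self, reward, td_error):
        # Illustrative potential: reward magnitude plus TD-error magnitude,
        # so unusually good and unusually bad transitions both score high.
        return abs(reward) + abs(td_error)

    def push(self, state, action, reward, next_state, done, td_error=0.0):
        self.buffer.append((state, action, reward, next_state, done))
        self.potentials.append(self.potential_fn(reward, td_error) + self.eps)

    def sample(self, batch_size):
        probs = np.asarray(self.potentials, dtype=np.float64)
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]


# Usage sketch: push transitions collected during rollouts, then draw minibatches
# whose composition is skewed toward high-potential experiences.
buf = PotentialReplayBuffer(capacity=1000)
for _ in range(500):
    s, a, r = np.random.rand(4), random.randint(0, 3), np.random.randn()
    buf.push(s, a, r, np.random.rand(4), done=False, td_error=np.random.randn())
batch = buf.sample(32)
```

Note that plain PPO is on-policy and normally does not reuse off-policy experience; how the paper reconciles replay with PPO updates in STPD-PPO is not described in the abstract, so this sketch covers only the replay-buffer idea.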

Original language: English
Article number: 120788
Journal: Information Sciences
Volume: 679
DOIs
Publication status: Published - Sept 2024

Keywords

  • Deep reinforcement learning
  • Experience replay
  • Resource allocation
  • Transformers
  • Ultra-dense network
