Data-Empowered Trajectory Planning Based on Two-Phase Deep Reinforcement Learning Method

Yin Peng, Yiwei Liu*, Linye Wang, Yizheng Ge, Weihao Yan, Lihui Feng, Yufan Du

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Existing trajectory planning methods for distributed unmanned systems are limited by the data-empowered information available from the environment. A deep reinforcement learning-based two-phase trajectory planning method is proposed, comprising a mission transition phase (MTP) and a mission maintenance phase (MMP). During the MTP, a mobile node moves from its current position to a target position while avoiding obstacles; during the MMP, it provides assisted communication among nodes in the mission area. A deep learning model is designed for each phase to realize trajectory planning, and the optimal model, which improves the planning reward, is obtained by using experience pools and sampling. The method can handle complex, high-dimensional optimization and adapt to dynamic environments, making trajectory planning more accurate and efficient. It consumes a similar number of time steps to the optimal-path method and one third the time steps of coordinate-transition methods, guaranteeing the safety of the unmanned system and reducing energy consumption. Moreover, the method shows clear advantages in the deployment of large-scale network scenes and fulfills the auxiliary communication task with simplified processes, requiring only 2/3 and 1/4 of the computational complexity of particle swarm optimization and scanning methods, respectively.
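The abstract's training scheme, storing transitions in experience pools and drawing samples to fit each phase's model, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class and variable names (`ExperiencePool`, `pools`, the per-phase keys) are hypothetical, and only the generic replay-buffer idea (store transitions, sample uniform mini-batches) is shown.

```python
import random
from collections import deque

class ExperiencePool:
    """Hypothetical sketch of an experience pool with uniform sampling."""

    def __init__(self, capacity=10_000):
        # Bounded buffer: oldest transitions are discarded when full
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random mini-batch, which breaks temporal correlation
        # between consecutive transitions before a training update
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# One pool per phase, mirroring the two-phase design (MTP and MMP)
pools = {"MTP": ExperiencePool(), "MMP": ExperiencePool()}
pools["MTP"].store(state=[0.0, 0.0], action=1, reward=-0.1,
                   next_state=[0.1, 0.0], done=False)
batch = pools["MTP"].sample(batch_size=1)
```

Each phase's model would then be updated from its own pool's mini-batches, so the MTP and MMP policies are trained on transitions drawn from their respective tasks.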

Original language: English
Journal: IEEE Internet of Things Journal
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • Communication Network
  • Deep Reinforcement Learning
  • Trajectory Planning
  • Unmanned Aerial Vehicles
