TY - JOUR
T1 - Multiple UAVs Path Planning Based on Deep Reinforcement Learning in Communication Denial Environment
AU - Xu, Yahao
AU - Wei, Yiran
AU - Jiang, Keyang
AU - Wang, Di
AU - Deng, Hongbin
N1 - Publisher Copyright:
© 2023 by the authors.
PY - 2023/1
Y1 - 2023/1
N2 - In this paper, we propose C51-Duel-IP (C51 Dueling DQN with Independent Policy), a dynamic-destination path-planning algorithm that addresses autonomous navigation and avoidance for multiple Unmanned Aerial Vehicles (UAVs) in a communication-denial environment. The proposed algorithm expresses the Q function output by the Dueling network as a Q distribution, which improves the fitting ability of the Q value. We also extend the single-step temporal difference (TD) update to an N-step TD update, which resolves the inflexibility of single-step updates. More importantly, we use an independent policy so that multiple UAVs achieve autonomous avoidance and navigation without any communication with each other. Under communication denial, the independent policy maintains consistency among the UAVs and avoids greedy behavior by individual UAVs. In multiple-UAV dynamic-destination scenarios, our work covers path planning with take-off from different initial positions and dynamic path planning with take-off from the same initial position. Hardware-in-the-loop (HITL) experiments show that the C51-Duel-IP algorithm is considerably more robust and effective than the original Dueling-IP and DQN-IP algorithms in an urban simulation environment. The independent-policy algorithm achieves performance similar to a shared policy, with the significant advantage of operating in a communication-denial environment.
AB - In this paper, we propose C51-Duel-IP (C51 Dueling DQN with Independent Policy), a dynamic-destination path-planning algorithm that addresses autonomous navigation and avoidance for multiple Unmanned Aerial Vehicles (UAVs) in a communication-denial environment. The proposed algorithm expresses the Q function output by the Dueling network as a Q distribution, which improves the fitting ability of the Q value. We also extend the single-step temporal difference (TD) update to an N-step TD update, which resolves the inflexibility of single-step updates. More importantly, we use an independent policy so that multiple UAVs achieve autonomous avoidance and navigation without any communication with each other. Under communication denial, the independent policy maintains consistency among the UAVs and avoids greedy behavior by individual UAVs. In multiple-UAV dynamic-destination scenarios, our work covers path planning with take-off from different initial positions and dynamic path planning with take-off from the same initial position. Hardware-in-the-loop (HITL) experiments show that the C51-Duel-IP algorithm is considerably more robust and effective than the original Dueling-IP and DQN-IP algorithms in an urban simulation environment. The independent-policy algorithm achieves performance similar to a shared policy, with the significant advantage of operating in a communication-denial environment.
KW - UAV path planning
KW - communication denial
KW - multi-agent reinforcement learning
KW - visual perception
UR - http://www.scopus.com/inward/record.url?scp=85146786977&partnerID=8YFLogxK
U2 - 10.3390/math11020405
DO - 10.3390/math11020405
M3 - Article
AN - SCOPUS:85146786977
SN - 2227-7390
VL - 11
JO - Mathematics
JF - Mathematics
IS - 2
M1 - 405
ER -