Abstract
To solve the path planning problem of multiple UAVs arriving at a target synchronously, a battlefield environment model and a Markov decision process model of path planning for a single UAV are established, and the optimal path is calculated with the Q-learning algorithm. The resulting Q-table is used to compute the shortest path of each UAV and the cooperative range, and time-coordinated paths are then obtained by adjusting the action selection strategy of the circumventing UAVs. To address collision avoidance among multiple UAVs, a partial replanning area is determined by designing retreat parameters, and, based on deep reinforcement learning theory, a neural network replaces the Q-table to re-plan the partial paths of the UAVs, which avoids the problem of dimensional explosion. For previously unexplored obstacles, an obstacle matrix is designed based on the idea of the artificial potential field and superimposed on the original Q-table to realize collision avoidance against such obstacles. The simulation results verify that the proposed reinforcement learning path planning method yields coordinated paths with both time coordination and collision avoidance, and that previously unexplored obstacles in the simulation can be avoided as well. Compared with the A* algorithm, the proposed method achieves higher efficiency in online applications.
| Translated title of the contribution | Q-Learning-based Multi-UAV Cooperative Path Planning Method |
|---|---|
| Original language | Chinese (Traditional) |
| Pages (from-to) | 484-495 |
| Number of pages | 12 |
| Journal | Binggong Xuebao/Acta Armamentarii |
| Volume | 44 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Feb 2023 |
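The single-UAV planning step described in the abstract can be illustrated with a minimal tabular Q-learning sketch on a 2-D grid. This is not the paper's implementation: the grid, reward values, hyperparameters, and names (`GRID`, `train_q_table`, `greedy_path`) are illustrative assumptions; it only shows how a Q-table learned from step/obstacle/goal rewards yields a shortest path when followed greedily.

```python
import numpy as np

# Hypothetical 5x5 battlefield grid: 1 = known obstacle, 0 = free cell.
GRID = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
])
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, a):
    """Apply action a; blocked moves keep the UAV in place with a penalty."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]) or GRID[r, c]:
        return state, -10.0, False   # obstacle / boundary: penalize, stay
    if (r, c) == GOAL:
        return (r, c), 100.0, True   # target reached
    return (r, c), -1.0, False       # per-step cost favours short paths

def train_q_table(episodes=3000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy action selection strategy."""
    rng = np.random.default_rng(seed)
    Q = np.zeros(GRID.shape + (len(ACTIONS),))  # one Q-row per grid cell
    for _ in range(episodes):
        s, done = START, False
        while not done:
            a = (rng.integers(len(ACTIONS)) if rng.random() < eps
                 else int(np.argmax(Q[s])))
            s2, reward, done = step(s, a)
            # Standard Q-learning temporal-difference update.
            Q[s][a] += alpha * (reward + gamma * np.max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, max_len=50):
    """Follow the learned Q-table greedily from START to GOAL."""
    s, path = START, [START]
    while s != GOAL and len(path) < max_len:
        s, _, _ = step(s, int(np.argmax(Q[s])))
        path.append(s)
    return path
```

In this sketch the per-step cost of -1 is what makes the greedy policy prefer the shortest route, mirroring the shortest-path role of the Q-table in the abstract; the paper's further steps (time coordination, partial replanning with a neural network, and the superimposed obstacle matrix for unexplored obstacles) build on such a table but are not shown here.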