Abstract
Path planning for an Unmanned Surface Vehicle (USV) in an unknown environment places strong demands on adaptability and real-time performance. To this end, this paper proposes a path planning algorithm based on Deep Reinforcement Learning (DRL). To meet the requirements of planning, obstacle avoidance, and environment adaptation, the proposed method builds on the Asynchronous Advantage Actor-Critic (A3C) algorithm by optimizing the network architecture, enriching the navigation data, and re-regulating the agent's action space. Three kinds of maps are used for targeted training to improve flexibility. Pre-training data are collected with deep neural networks on a GPU platform, which improves training efficiency and guarantees the real-time requirement. Experimental results show that, compared with existing methods, the training time is reduced by 59.3% and the efficiency is improved by more than 79.5%. Moreover, the performance of the trained model in unknown environments is effectively enhanced.
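The abstract only summarizes the A3C-based design. As a hedged illustration of the general scheme it describes (a shared network with policy and value heads over a discretized action space), the following PyTorch sketch shows one possible actor-critic setup; the layer sizes, the 8-action discretization, and all names here are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical A3C-style actor-critic for a USV agent: a shared encoder
# over navigation observations, a policy head over a small discrete set
# of heading/throttle actions, and a value head. Sizes are illustrative.
class ActorCritic(nn.Module):
    def __init__(self, obs_dim: int = 64, n_actions: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.policy_head = nn.Linear(128, n_actions)  # logits over discrete actions
        self.value_head = nn.Linear(128, 1)           # state-value estimate V(s)

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        return self.policy_head(h), self.value_head(h)

def a3c_loss(logits, values, actions, returns, entropy_coef: float = 0.01):
    """Standard advantage actor-critic loss used by each A3C worker."""
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.squeeze(-1)
    policy_loss = -(dist.log_prob(actions) * advantages.detach()).mean()
    value_loss = advantages.pow(2).mean()
    entropy_bonus = dist.entropy().mean()
    return policy_loss + 0.5 * value_loss - entropy_coef * entropy_bonus
```

In an A3C setting, several such workers would run in parallel environments and asynchronously push gradients of this loss to a shared model, which is consistent with the GPU-accelerated training the abstract mentions.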
| Translated title of the contribution | A Real-Time USV Path Planning Algorithm in Unknown Environment Based on Deep Reinforcement Learning |
| --- | --- |
| Original language | Chinese (Traditional) |
| Pages (from-to) | 86-92 |
| Number of pages | 7 |
| Journal | Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology |
| Volume | 39 |
| Publication status | Published - Oct 2019 |