Autonomous Navigation Method Based on Deep Reinforcement Learning with Dual-Layer Perception Mechanism

Ranhui Yang, Yuepeng Tang, Guangming Xiong*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Autonomous navigation systems are crucial in the field of robotics. Traditional methods often require extensive manual parameter tuning, which is time-consuming. In this paper, we present an autonomous navigation method that leverages deep reinforcement learning (DRL) enhanced by a dual-layer perception mechanism. This method takes raw sensor data, a local grid map, and the goal pose as inputs. It utilizes the DRL framework to learn autonomous navigation strategies and directly generates robot action commands. The method eliminates the need for manual parameter adjustment, relying solely on continuous trial-and-error training to enable autonomous navigation. Comparative experiments in a simulation environment show that this system offers enhanced robustness and scalability compared to other DRL-based autonomous navigation systems.
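The abstract describes fusing raw sensor data, a local grid map, and the goal pose into a single observation that a DRL policy maps directly to action commands. The sketch below illustrates that input-fusion pattern in plain Python; the function names, the relative-goal encoding (distance plus heading error), and the placeholder policy are illustrative assumptions, not the paper's actual network or training setup.

```python
import math

def build_observation(lidar_ranges, local_grid, goal_pose, robot_pose):
    """Fuse raw scan, flattened local grid, and relative goal into one vector.

    goal_pose / robot_pose are (x, y, yaw); the goal is re-expressed in the
    robot frame as a distance and a heading error, a common DRL encoding.
    """
    dx = goal_pose[0] - robot_pose[0]
    dy = goal_pose[1] - robot_pose[1]
    distance = math.hypot(dx, dy)
    heading = math.atan2(dy, dx) - robot_pose[2]
    heading = math.atan2(math.sin(heading), math.cos(heading))  # wrap to [-pi, pi]
    grid_flat = [cell for row in local_grid for cell in row]    # flatten occupancy grid
    return list(lidar_ranges) + grid_flat + [distance, heading]

def policy_stub(observation, n_beams, max_v=0.5, max_w=1.0):
    """Stand-in for a trained DRL policy: slow near obstacles, turn toward goal.

    A real policy would be a learned network; this heuristic only shows the
    observation -> (linear velocity, angular velocity) interface.
    """
    nearest = min(observation[:n_beams])          # closest lidar return
    distance, heading = observation[-2], observation[-1]
    v = max_v * min(1.0, nearest) * max(0.0, math.cos(heading))
    w = max(-max_w, min(max_w, heading))
    return v, w
```

In a training loop, `build_observation` would be called each simulation step and the resulting vector fed to the policy network in place of `policy_stub`, with the returned command pair sent to the robot's velocity controller.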

Original language: English
Title of host publication: Proceedings of 2024 IEEE International Conference on Unmanned Systems, ICUS 2024
Editors: Rong Song
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 567-572
Number of pages: 6
ISBN (Electronic): 9798350384185
DOIs
Publication status: Published - 2024
Event: 2024 IEEE International Conference on Unmanned Systems, ICUS 2024 - Nanjing, China
Duration: 18 Oct 2024 - 20 Oct 2024

Publication series

Name: Proceedings of 2024 IEEE International Conference on Unmanned Systems, ICUS 2024

Conference

Conference: 2024 IEEE International Conference on Unmanned Systems, ICUS 2024
Country/Territory: China
City: Nanjing
Period: 18/10/24 - 20/10/24

Keywords

  • autonomous navigation
  • grid map
  • reinforcement learning
