Research into Autonomous Vehicles Following and Obstacle Avoidance Based on Deep Reinforcement Learning Method under Map Constraints

Zheng Li*, Shihua Yuan, Xufeng Yin, Xueyuan Li, Shouxing Tang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Compared with traditional rule-based algorithms, deep reinforcement learning methods in autonomous driving can reduce the response time of vehicles to the driving environment and fully exploit the advantages of autonomous driving. At present, autonomous vehicles mainly drive on urban roads and are constrained by map elements such as lane boundaries, lane driving rules, and lane center lines. In this paper, a deep reinforcement learning approach that explicitly considers map elements is proposed to address the autonomous driving tasks of vehicle following and obstacle avoidance. When the deep reinforcement learning method is modeled, an obstacle representation method is proposed to encode the external obstacle information required as input by the ego vehicle, addressing the problem that the number and state of external obstacles are not fixed.
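The abstract does not spell out the obstacle representation, but a common way to feed a variable number of obstacles into a fixed-size policy network is to keep only the k nearest obstacles and zero-pad the remaining slots. The sketch below illustrates that idea under assumed details: the function name encode_obstacles, the per-obstacle feature layout (x, y, vx, vy), and the slot count max_obstacles are all hypothetical, not taken from the paper.

```python
import numpy as np

def encode_obstacles(ego_xy, obstacles, max_obstacles=5, feature_dim=4):
    """Hypothetical fixed-size encoding of a variable number of obstacles.

    Each obstacle is assumed to be (x, y, vx, vy) in the map frame.
    Only the `max_obstacles` obstacles nearest to the ego vehicle are kept,
    and unused slots are zero-padded, so the DRL policy always receives an
    input vector of the same length.
    """
    features = np.zeros((max_obstacles, feature_dim), dtype=np.float32)
    if not obstacles:
        return features.flatten()

    obs = np.asarray(obstacles, dtype=np.float32)            # shape (n, feature_dim)
    ego = np.asarray(ego_xy, dtype=np.float32)
    dists = np.linalg.norm(obs[:, :2] - ego, axis=1)         # distance to ego vehicle
    nearest = obs[np.argsort(dists)[:max_obstacles]]         # k nearest obstacles

    # Express positions relative to the ego vehicle before feeding the policy.
    nearest[:, :2] -= ego
    features[: len(nearest)] = nearest
    return features.flatten()                                 # shape (max_obstacles * feature_dim,)


# Example: two obstacles are padded up to five fixed slots.
state = encode_obstacles(ego_xy=(10.0, 2.0),
                         obstacles=[(15.0, 2.5, -1.0, 0.0),
                                    (30.0, 1.8, 0.5, 0.0)])
print(state.shape)  # (20,)
```

This keeps the observation dimension constant regardless of how many obstacles the perception system reports, which is one standard workaround for the variable-input problem the abstract describes.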

Original language: English
Article number: 844
Journal: Sensors
Volume: 23
Issue number: 2
DOIs
Publication status: Published - Jan 2023

Keywords

  • autonomous driving
  • car following
  • deep reinforcement learning
  • obstacle avoidance
  • obstacle representation
