Optimal reinforcement learning and probabilistic-risk-based path planning and following of autonomous vehicles with obstacle avoidance

Hamid Taghavifar, Leyla Taghavifar, Chuan Hu, Chongfeng Wei, Yechen Qin*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

In this paper, a novel algorithm is proposed for the motion planning and path following of automated cars, incorporating a collision-avoidance strategy. The approach couples optimal reinforcement learning (RL) with a new risk-assessment method. To this end, a collision-avoidance strategy based on a probabilistic risk function is developed, and the proposed RL approach learns the probability distributions of the positions of the adjacent and leading vehicles. A nonlinear model predictive control (NMPC) algorithm then approximates the optimal steering input and the required yaw moment to follow the safest and shortest path within the optimal RL-based probabilistic-risk-function framework. In addition, the travel speed of the ego vehicle is kept stable so that ride comfort is provided for the vehicle occupants. To this end, the steering-system dynamics are also incorporated to give a thorough account of the vehicle's dynamic characteristics. Different driving scenarios are employed to evaluate the effectiveness of the proposed algorithm.
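The paper's exact risk formulation is not given in the abstract, but the idea of a probabilistic collision-risk function over learned position distributions can be sketched as follows. Here the obstacle vehicle's predicted position is modelled as a bivariate Gaussian (a common choice; the covariance stands in for the distribution the RL component would learn), and the risk at a candidate ego position is the Gaussian density evaluated there. The function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def collision_risk(ego_xy, obs_mean, obs_cov):
    """Illustrative probabilistic collision risk: the value of a
    bivariate Gaussian (modelling the obstacle vehicle's predicted
    position) evaluated at the ego position. Higher value = higher risk.
    This is an assumed stand-in for the paper's risk function."""
    d = np.asarray(ego_xy, dtype=float) - np.asarray(obs_mean, dtype=float)
    inv_cov = np.linalg.inv(obs_cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(obs_cov)))
    return norm * np.exp(-0.5 * d @ inv_cov @ d)

# Hypothetical scenario: leading vehicle predicted 20 m ahead in the
# same lane, with larger longitudinal than lateral uncertainty.
obs_mean = np.array([20.0, 0.0])
obs_cov = np.diag([4.0, 1.0])

risk_same_lane = collision_risk([18.0, 0.0], obs_mean, obs_cov)
risk_adjacent = collision_risk([18.0, 3.5], obs_mean, obs_cov)

# Shifting one lane width (3.5 m) laterally lowers the risk, so a
# planner minimising this term would prefer the lane-change path.
assert risk_adjacent < risk_same_lane
```

In a planner such as the NMPC layer described above, a term like this would enter the stage cost so that candidate trajectories are penalised by their accumulated risk, trading path length against collision probability.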

Keywords

  • Automated cars
  • Obstacle avoidance
  • Path planning
  • Reinforcement learning

