Continuous advantage learning for minimum-time trajectory planning of autonomous vehicles

Zhuo Li, Weiran Wu, Jialin Wang, Gang Wang, Jian Sun*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

This paper investigates the minimum-time trajectory planning problem of an autonomous vehicle. To deal with the unknown and uncertain dynamics of the vehicle, the trajectory planning problem is modeled as a Markov decision process with a continuous action space. To solve it, we propose a continuous advantage learning (CAL) algorithm based on the advantage-value equation and adopt a stochastic policy in the form of a multivariate Gaussian distribution to encourage exploration. A shared actor-critic architecture is designed to simultaneously approximate the stochastic policy and the value function, which greatly reduces the computational burden compared with general actor-critic methods. Moreover, the shared actor-critic is updated with a loss function built as the mean-square consistency error of the advantage-value equation, and the update step is performed several times at each time step to improve data efficiency. Simulations validate the effectiveness of the proposed CAL algorithm and show that it outperforms the soft actor-critic algorithm.
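For illustration, below is a minimal sketch of a shared actor-critic update in the spirit of the CAL algorithm described in the abstract. The paper's exact advantage-value equation is not reproduced here; the soft one-step consistency target used in the loss (r + γV(s') − V(s) − τ·log π(a|s)) is an assumption, and all names, network sizes, and hyperparameters are illustrative rather than taken from the paper.

```python
# Hedged sketch: shared actor-critic with a mean-square consistency loss.
# The consistency target below is an assumed stand-in for the paper's
# advantage-value equation; names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class SharedActorCritic(nn.Module):
    """A single trunk feeds both the Gaussian policy head and the value head."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)       # Gaussian mean
        self.log_std = nn.Linear(hidden, action_dim)  # Gaussian log-std
        self.value = nn.Linear(hidden, 1)             # state value V(s)

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        std = self.log_std(h).clamp(-5.0, 2.0).exp()
        dist = torch.distributions.Normal(self.mu(h), std)
        return dist, self.value(h).squeeze(-1)


def cal_update(net, optimizer, batch, gamma=0.99, tau=0.1, n_updates=4):
    """Run several gradient steps per environment step to reuse sampled data."""
    s, a, r, s_next, done = batch  # tensors drawn from a replay buffer
    for _ in range(n_updates):
        dist, v = net(s)
        with torch.no_grad():
            _, v_next = net(s_next)  # bootstrap target, held fixed this step
        log_prob = dist.log_prob(a).sum(-1)
        # Mean-square consistency error coupling the policy and the value
        # through the (assumed) advantage-value relation.
        delta = r + gamma * (1.0 - done) * v_next - v - tau * log_prob
        loss = delta.pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because the policy and value heads share one trunk, a single backward pass updates both, which reflects the reduced computational burden the abstract attributes to the shared actor-critic architecture.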

Original language: English
Article number: 172206
Journal: Science China Information Sciences
Volume: 67
Issue number: 7
Publication status: Published - Jul 2024

Keywords

  • continuous advantage learning
  • shared actor-critic
  • stochastic policy
  • trajectory planning

