Adaptive Speed Planning of Connected and Automated Vehicles Using Multi-Light Trained Deep Reinforcement Learning

Bo Liu, Chao Sun*, Bo Wang, Fengchun Sun

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

50 Citations (Scopus)

Abstract

Through shared real-time traffic information and perception of complex environments, connected and automated vehicles (CAVs) are endowed with global decision-making capabilities far beyond those of human drivers. Given information from more traffic lights, this planning ability can be greatly strengthened. This study proposes an adaptive speed planning method for CAVs based on multi-light trained deep reinforcement learning (DRL), aiming to improve the fuel economy and comfort of CAVs. With a reasonable reward function, the training algorithm takes the key environmental information received by the vehicle as input and outputs the acceleration that maximizes the cumulative reward. The results show that the trained DRL agent can adapt to variable scenarios with traffic lights and quickly solve for an approximately optimal speed trajectory. Multi-light DRL models save 6.79% fuel compared with single-light ones and outperform a non-RL method using multi-light optimization in both fuel economy and computational efficiency.
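As context for the abstract's description of the method, the sketch below shows the general shape of a DDPG-style actor (one of the article's keywords) that maps a CAV's observation to a bounded acceleration command. This is not the authors' code: the observation layout (own speed, distance to the next light, remaining phase time, phase flag), the network sizes, and the 3 m/s² acceleration limit are all illustrative assumptions.

```python
import numpy as np

A_MAX = 3.0  # assumed acceleration bound in m/s^2 (illustrative, not from the paper)

rng = np.random.default_rng(0)

class Actor:
    """Minimal DDPG-style deterministic policy: state -> bounded acceleration."""

    def __init__(self, state_dim=4, hidden=32):
        # Small randomly initialised two-layer network; in training these
        # weights would be updated from the critic's policy gradient.
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def act(self, state):
        h = np.tanh(state @ self.W1 + self.b1)          # hidden layer
        return A_MAX * np.tanh(h @ self.W2 + self.b2)   # squash to (-A_MAX, A_MAX)

actor = Actor()
# Assumed observation: [speed m/s, distance to light m, phase time left s, phase flag]
state = np.array([12.0, 150.0, 8.0, 1.0])
accel = float(actor.act(state))
print(accel)  # an acceleration command inside (-3, 3)
```

The tanh output layer is one common way to keep the action inside physical limits; the paper's actual reward function additionally trades off fuel use and comfort, which this sketch does not model.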

Original language: English
Pages (from-to): 3533-3546
Number of pages: 14
Journal: IEEE Transactions on Vehicular Technology
Volume: 71
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2022

Keywords

  • Adaptive speed planning
  • connected and automated vehicle
  • deep deterministic policy gradient
  • deep reinforcement learning
  • eco-driving
