TY - JOUR
T1 - Adaptive Eco-Driving With Guided Speed Planning and Lane Changing Through Signalized Intersections
AU - Leng, Jianghao
AU - Sun, Chao
AU - Dong, Haoxuan
AU - Li, Dongjun
AU - Zhang, Chuntao
AU - Chen, Peter C.Y.
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2025
Y1 - 2025
N2 - In dynamic traffic flow conditions, lane-changing maneuvers hold significant potential for improving energy and time efficiency. However, existing research often overlooks the influence of a global reference speed trajectory, especially in urban settings with multiple signalized intersections. To address this gap, this study proposes an eco-driving strategy for connected and automated vehicles (CAVs) that integrates deep reinforcement learning (DRL) and model predictive control (MPC), accounting for the impact of a guided speed profile. A three-stage speed planning framework, following a coarse-smooth-optimization scheme, is introduced to efficiently generate an energy-saving guided speed profile. This guided speed profile is then fed into the DRL agent as a network input, together with information on surrounding human-driven vehicles (HDVs). The DRL adopts a soft actor-critic (SAC) algorithm integrated with an MPC controller, which generates control outputs for both lane-changing and car-following maneuvers based on the DRL decisions and the guided speed profile. The MPC additionally certifies the decisions to ensure vehicle safety. Simulation results indicate that, compared to benchmark methods, the proposed strategy achieves energy savings of up to 23.6% while maintaining high computational efficiency, with a smaller time delay when approaching intersections and guaranteed vehicle safety.
AB - In dynamic traffic flow conditions, lane-changing maneuvers hold significant potential for improving energy and time efficiency. However, existing research often overlooks the influence of a global reference speed trajectory, especially in urban settings with multiple signalized intersections. To address this gap, this study proposes an eco-driving strategy for connected and automated vehicles (CAVs) that integrates deep reinforcement learning (DRL) and model predictive control (MPC), accounting for the impact of a guided speed profile. A three-stage speed planning framework, following a coarse-smooth-optimization scheme, is introduced to efficiently generate an energy-saving guided speed profile. This guided speed profile is then fed into the DRL agent as a network input, together with information on surrounding human-driven vehicles (HDVs). The DRL adopts a soft actor-critic (SAC) algorithm integrated with an MPC controller, which generates control outputs for both lane-changing and car-following maneuvers based on the DRL decisions and the guided speed profile. The MPC additionally certifies the decisions to ensure vehicle safety. Simulation results indicate that, compared to benchmark methods, the proposed strategy achieves energy savings of up to 23.6% while maintaining high computational efficiency, with a smaller time delay when approaching intersections and guaranteed vehicle safety.
KW - Connected and automated vehicles (CAVs)
KW - deep reinforcement learning (DRL)
KW - eco-driving
KW - lane changing
KW - model predictive control (MPC) certification
KW - signalized intersections
UR - http://www.scopus.com/inward/record.url?scp=85217981139&partnerID=8YFLogxK
U2 - 10.1109/TTE.2025.3540057
DO - 10.1109/TTE.2025.3540057
M3 - Article
AN - SCOPUS:85217981139
SN - 2332-7782
VL - 11
SP - 8365
EP - 8376
JO - IEEE Transactions on Transportation Electrification
JF - IEEE Transactions on Transportation Electrification
IS - 3
ER -