TY - JOUR
T1 - Eco-driving for connected automated hybrid electric vehicles in learning-enabled layered transportation systems
AU - Yan, Su
AU - Fang, Jiayi
AU - Yang, Chao
AU - Chen, Ruihu
AU - Liu, Hui
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/5
Y1 - 2025/5
N2 - Eco-driving strategies have the potential to enhance energy savings, safety, and transportation efficiency by optimizing vehicle interactions with dynamic traffic environments. This study addresses the challenge of balancing computational efficiency and optimization effectiveness given the high-dimensional state and control variables introduced by extensive traffic information. In contrast to existing methods, the novelty lies in developing an eco-driving strategy within a traffic information cyber–physical system. The cyber-layer maps simulated road segments for training vehicles equipped with the Proximal Policy Optimization (PPO) algorithm, enabling effective planning of economical speeds. During vehicle operation, the cyber-layer maps the real-time physical environment, providing a predictive state sequence for the vehicle's adaptive equivalent fuel consumption minimization strategy. The efficiency factor is then optimized in a rolling manner to further improve fuel economy. A comparative analysis with existing methods across different scenarios shows that the proposed strategy significantly improves fuel economy while ensuring real-time speed planning and reliable speed-tracking performance.
AB - Eco-driving strategies have the potential to enhance energy savings, safety, and transportation efficiency by optimizing vehicle interactions with dynamic traffic environments. This study addresses the challenge of balancing computational efficiency and optimization effectiveness given the high-dimensional state and control variables introduced by extensive traffic information. In contrast to existing methods, the novelty lies in developing an eco-driving strategy within a traffic information cyber–physical system. The cyber-layer maps simulated road segments for training vehicles equipped with the Proximal Policy Optimization (PPO) algorithm, enabling effective planning of economical speeds. During vehicle operation, the cyber-layer maps the real-time physical environment, providing a predictive state sequence for the vehicle's adaptive equivalent fuel consumption minimization strategy. The efficiency factor is then optimized in a rolling manner to further improve fuel economy. A comparative analysis with existing methods across different scenarios shows that the proposed strategy significantly improves fuel economy while ensuring real-time speed planning and reliable speed-tracking performance.
KW - Connected and automated plug-in hybrid electric vehicle
KW - Deep reinforcement learning
KW - Eco-driving
KW - Economic speed planning
KW - Energy management strategy
UR - http://www.scopus.com/inward/record.url?scp=85218857386&partnerID=8YFLogxK
U2 - 10.1016/j.trd.2025.104677
DO - 10.1016/j.trd.2025.104677
M3 - Article
AN - SCOPUS:85218857386
SN - 1361-9209
VL - 142
JO - Transportation Research Part D: Transport and Environment
JF - Transportation Research Part D: Transport and Environment
M1 - 104677
ER -