TY - JOUR
T1 - Dynamic-Adaptive Eco-driving Strategy for DDEVs via State-Aware Deep Reinforcement Learning
AU - Fan, Yi
AU - He, Hongwen
AU - Wu, Changcheng
AU - Zhou, Yang
AU - Peng, Jiankun
N1 - Publisher Copyright:
© Published under licence by IOP Publishing Ltd.
PY - 2025/9/1
Y1 - 2025/9/1
N2 - This paper proposes a deep reinforcement learning (DRL)-based eco-driving strategy for distributed drive electric vehicles (DDEVs) to enhance energy efficiency and dynamic adaptability in complex traffic scenarios. Conventional DRL approaches often employ fixed exploration strategies that are insensitive to dynamic environmental states, leading to suboptimal and unstable performance when handling the unique sensitivity of DDEVs to stochastic disturbances. To address this limitation, the proposed method adaptively adjusts exploration noise based on environmental states by using state-dependent exploration (SDE). A high-fidelity 7-degree-of-freedom vehicle dynamics model and a traffic simulation environment are employed to validate the method. Results demonstrate that SDE-SAC achieves superior performance compared to baseline methods, with an energy consumption of 0.56 kWh and a mean velocity of 27.2 m/s, alongside improved motion stability. Furthermore, the method exhibits strong generalization in high-density traffic scenarios, maintaining energy efficiency and safety. This work advances the development of adaptive energy management systems for DDEVs by bridging the gap between environmental uncertainty and vehicular control.
AB - This paper proposes a deep reinforcement learning (DRL)-based eco-driving strategy for distributed drive electric vehicles (DDEVs) to enhance energy efficiency and dynamic adaptability in complex traffic scenarios. Conventional DRL approaches often employ fixed exploration strategies that are insensitive to dynamic environmental states, leading to suboptimal and unstable performance when handling the unique sensitivity of DDEVs to stochastic disturbances. To address this limitation, the proposed method adaptively adjusts exploration noise based on environmental states by using state-dependent exploration (SDE). A high-fidelity 7-degree-of-freedom vehicle dynamics model and a traffic simulation environment are employed to validate the method. Results demonstrate that SDE-SAC achieves superior performance compared to baseline methods, with an energy consumption of 0.56 kWh and a mean velocity of 27.2 m/s, alongside improved motion stability. Furthermore, the method exhibits strong generalization in high-density traffic scenarios, maintaining energy efficiency and safety. This work advances the development of adaptive energy management systems for DDEVs by bridging the gap between environmental uncertainty and vehicular control.
UR - https://www.scopus.com/pages/publications/105022745420
U2 - 10.1088/1742-6596/3125/1/012004
DO - 10.1088/1742-6596/3125/1/012004
M3 - Conference article
AN - SCOPUS:105022745420
SN - 1742-6588
VL - 3125
JO - Journal of Physics: Conference Series
JF - Journal of Physics: Conference Series
IS - 1
M1 - 012004
T2 - 1st International Conference on Green Energy and Intelligent Transportation, ICGEITS 2025
Y2 - 29 July 2025 through 31 July 2025
ER -