TY - JOUR
T1 - Tackling SOC long-term dynamic for energy management of hybrid electric buses via adaptive policy optimization
AU - Zhang, Hailong
AU - Peng, Jiankun
AU - Tan, Huachun
AU - Dong, Hanxuan
AU - Ding, Fan
AU - Ran, Bin
N1 - Publisher Copyright:
© 2020 Elsevier Ltd
PY - 2020/7/1
Y1 - 2020/7/1
AB - Plug-in hybrid electric buses (PHEBs) have the potential to satisfy both fuel-efficiency and driving-range requirements under complex urban traffic conditions. However, optimal charge and discharge management remains a pivotal challenge for energy management because of the inherent uncertainty of driving conditions. Common methods based on a reference state-of-charge (SOC) profile lack adaptiveness, which restricts the economic performance of online energy management systems. Promisingly, reinforcement-learning-based energy management strategies exhibit significant self-learning ability. For PHEBs, however, the sparse rewards caused by long-term SOC shortage easily trap such strategies in locally optimal solutions. The work presented in this paper focuses on incorporating battery power reduction, formulated as conditional entropy, into a reinforcement-learning-based energy management strategy. The proposed method, named adaptive policy optimization (APO), introduces a novel advantage function that evaluates energy-saving performance while accounting for long-term SOC dynamics, and a Bayesian-neural-network-based SOC shortage probability estimator is used to optimize the energy management strategy parameterized by a deep neural network. Experiments on a standard driving cycle demonstrate the optimality, self-learning ability, and convergence of APO. Moreover, its adaptability and robustness are validated on real bus trajectory data. Across these comprehensive experiments, the proposed model exhibits improved fuel economy and more suitable SOC planning compared with existing energy management strategies. The results indicate that APO outperforms the two compared online strategies by 9.8% and 2.6%, respectively, and reaches 98% of the energy-saving rate of the offline global optimum.
KW - Energy management
KW - Intelligent bus system
KW - Plug-in hybrid electric vehicle
KW - Reinforcement learning
KW - Trajectory data mining
UR - http://www.scopus.com/inward/record.url?scp=85084056362&partnerID=8YFLogxK
U2 - 10.1016/j.apenergy.2020.115031
DO - 10.1016/j.apenergy.2020.115031
M3 - Article
AN - SCOPUS:85084056362
SN - 0306-2619
VL - 269
JO - Applied Energy
JF - Applied Energy
M1 - 115031
ER -