TY - JOUR
T1 - Longevity-conscious energy management strategy of fuel cell hybrid electric vehicle based on deep reinforcement learning
AU - Tang, Xiaolin
AU - Zhou, Haitao
AU - Wang, Feng
AU - Wang, Weida
AU - Lin, Xianke
N1 - Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2022/1/1
Y1 - 2022/1/1
N2 - Deep reinforcement learning-based energy management strategies play an essential role in improving fuel economy and extending fuel cell lifetime for fuel cell hybrid electric vehicles. In this work, the traditional Deep Q-Network is compared with a Deep Q-Network with prioritized experience replay. Furthermore, the Deep Q-Network with prioritized experience replay is designed as the energy management strategy to minimize hydrogen consumption and is compared with dynamic programming. Moreover, fuel cell system degradation is incorporated into the objective function, and a balance between fuel economy and fuel cell system degradation is achieved by adjusting the degradation weight and the hydrogen consumption weight. Finally, a combined driving cycle is selected to further verify the effectiveness of the proposed strategy in unfamiliar driving environments and untrained situations. The training results under UDDS show that the fuel economy of the EMS decreases by 0.53 % when fuel cell system degradation is considered, reaching 88.73 % of the DP-based EMS in the UDDS, and the degradation of the fuel cell system is effectively suppressed. At the same time, the computational efficiency is improved by more than 70 % compared to the DP-based strategy.
AB - Deep reinforcement learning-based energy management strategies play an essential role in improving fuel economy and extending fuel cell lifetime for fuel cell hybrid electric vehicles. In this work, the traditional Deep Q-Network is compared with a Deep Q-Network with prioritized experience replay. Furthermore, the Deep Q-Network with prioritized experience replay is designed as the energy management strategy to minimize hydrogen consumption and is compared with dynamic programming. Moreover, fuel cell system degradation is incorporated into the objective function, and a balance between fuel economy and fuel cell system degradation is achieved by adjusting the degradation weight and the hydrogen consumption weight. Finally, a combined driving cycle is selected to further verify the effectiveness of the proposed strategy in unfamiliar driving environments and untrained situations. The training results under UDDS show that the fuel economy of the EMS decreases by 0.53 % when fuel cell system degradation is considered, reaching 88.73 % of the DP-based EMS in the UDDS, and the degradation of the fuel cell system is effectively suppressed. At the same time, the computational efficiency is improved by more than 70 % compared to the DP-based strategy.
KW - DQN algorithm
KW - Deep reinforcement learning
KW - Degradation
KW - Energy management strategy
KW - Fuel cell hybrid electric vehicles
KW - Prioritized experience replay
UR - https://www.scopus.com/pages/publications/85111986976
U2 - 10.1016/j.energy.2021.121593
DO - 10.1016/j.energy.2021.121593
M3 - Article
AN - SCOPUS:85111986976
SN - 0360-5442
VL - 238
JO - Energy
JF - Energy
M1 - 121593
ER -