TY - JOUR
T1 - Customized Energy Management for Fuel Cell Electric Vehicle Based on Deep Reinforcement Learning-Model Predictive Control Self-Regulation Framework
AU - Quan, Shengwei
AU - He, Hongwen
AU - Wei, Zhongbao
AU - Chen, Jinzhou
AU - Zhang, Zhendong
AU - Wang, Ya Xiong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Deep reinforcement learning (DRL) has been widely used in automotive energy management. However, DRL is computationally inefficient and insufficiently robust, making it difficult to apply to practical systems. In this article, a customized energy management strategy based on a deep reinforcement learning-model predictive control (DRL-MPC) self-regulation framework is proposed for fuel cell electric vehicles. The soft actor-critic (SAC) algorithm is used to train the energy management strategy offline, minimizing the system's comprehensive consumption and lifetime degradation. The trained SAC policy outputs a sequence of fuel cell actions at different states over the prediction horizon, which serves as the initial value for the nonlinear MPC solution. Within the MPC framework, the nonlinear optimization problem is solved iteratively to refine the action sequences produced by the SAC policy. In addition, the vehicle's routine operation dataset is collected to customize an update package that further improves the energy management performance. The DRL-MPC can optimize the SAC policy action at the state boundary to reduce system lifetime degradation. The proposed strategy also shows better optimization robustness than the SAC strategy under different vehicle loads. Moreover, after the update package is applied, the total cost is reduced by 5.93% compared with the SAC strategy, yielding better optimization under comprehensive conditions with different vehicle loads.
AB - Deep reinforcement learning (DRL) has been widely used in automotive energy management. However, DRL is computationally inefficient and insufficiently robust, making it difficult to apply to practical systems. In this article, a customized energy management strategy based on a deep reinforcement learning-model predictive control (DRL-MPC) self-regulation framework is proposed for fuel cell electric vehicles. The soft actor-critic (SAC) algorithm is used to train the energy management strategy offline, minimizing the system's comprehensive consumption and lifetime degradation. The trained SAC policy outputs a sequence of fuel cell actions at different states over the prediction horizon, which serves as the initial value for the nonlinear MPC solution. Within the MPC framework, the nonlinear optimization problem is solved iteratively to refine the action sequences produced by the SAC policy. In addition, the vehicle's routine operation dataset is collected to customize an update package that further improves the energy management performance. The DRL-MPC can optimize the SAC policy action at the state boundary to reduce system lifetime degradation. The proposed strategy also shows better optimization robustness than the SAC strategy under different vehicle loads. Moreover, after the update package is applied, the total cost is reduced by 5.93% compared with the SAC strategy, yielding better optimization under comprehensive conditions with different vehicle loads.
KW - Customized energy management
KW - fuel cell and battery degradation
KW - fuel cell electric vehicle
KW - model predictive control
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85211504321&partnerID=8YFLogxK
U2 - 10.1109/TII.2024.3435359
DO - 10.1109/TII.2024.3435359
M3 - Article
AN - SCOPUS:85211504321
SN - 1551-3203
VL - 20
SP - 13776
EP - 13785
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
IS - 12
ER -