TY - JOUR
T1 - Type- and task-crossing energy management for fuel cell vehicles with longevity consideration
T2 - A heterogeneous deep transfer reinforcement learning framework
AU - Huang, Ruchen
AU - He, Hongwen
AU - Su, Qicong
AU - Härtl, Martin
AU - Jaensch, Malte
N1 - Publisher Copyright:
© 2024
PY - 2025/1/1
Y1 - 2025/1/1
N2 - The recent advancements in artificial intelligence have promoted deep reinforcement learning (DRL) as the preferred method for developing energy management strategies (EMSs) for fuel cell vehicles (FCVs). However, the development of DRL-based EMSs is a time-consuming process, requiring repetitive training when encountering different vehicle types or learning tasks. To surmount this technical barrier, this paper develops a transferable EMS rooted in heterogeneous deep transfer reinforcement learning (DTRL) across both FCV types and optimization tasks. Firstly, a simple source EMS based on the soft actor-critic (SAC) algorithm is pre-trained for a fuel cell sedan, solely focusing on hydrogen saving. After that, a heterogeneous DTRL framework is developed by integrating SAC with transfer learning, through which both heterogeneous deep neural networks and experience replay buffers can be transferred. Subsequently, the source EMS is transferred to the target new EMS of a fuel cell bus (FCB) to be reused, with additional consideration of the fuel cell (FC) longevity. Experimental simulations reveal that the heterogeneous DTRL framework expedites the development of the new EMS for the FCB by 90.28 %. Moreover, the new EMS achieves a 7.93 % reduction in hydrogen consumption and suppresses FC degradation by 63.21 %. By correlating different energy management tasks of FCVs, this article both expedites the development and facilitates the generalized application of DRL-based EMSs.
AB - The recent advancements in artificial intelligence have promoted deep reinforcement learning (DRL) as the preferred method for developing energy management strategies (EMSs) for fuel cell vehicles (FCVs). However, the development of DRL-based EMSs is a time-consuming process, requiring repetitive training when encountering different vehicle types or learning tasks. To surmount this technical barrier, this paper develops a transferable EMS rooted in heterogeneous deep transfer reinforcement learning (DTRL) across both FCV types and optimization tasks. Firstly, a simple source EMS based on the soft actor-critic (SAC) algorithm is pre-trained for a fuel cell sedan, solely focusing on hydrogen saving. After that, a heterogeneous DTRL framework is developed by integrating SAC with transfer learning, through which both heterogeneous deep neural networks and experience replay buffers can be transferred. Subsequently, the source EMS is transferred to the target new EMS of a fuel cell bus (FCB) to be reused, with additional consideration of the fuel cell (FC) longevity. Experimental simulations reveal that the heterogeneous DTRL framework expedites the development of the new EMS for the FCB by 90.28 %. Moreover, the new EMS achieves a 7.93 % reduction in hydrogen consumption and suppresses FC degradation by 63.21 %. By correlating different energy management tasks of FCVs, this article both expedites the development and facilitates the generalized application of DRL-based EMSs.
KW - Energy management strategy
KW - Fuel cell longevity
KW - Fuel cell vehicle
KW - Heterogeneous deep transfer reinforcement learning
KW - Soft actor-critic
UR - http://www.scopus.com/inward/record.url?scp=85205584105&partnerID=8YFLogxK
U2 - 10.1016/j.apenergy.2024.124594
DO - 10.1016/j.apenergy.2024.124594
M3 - Article
AN - SCOPUS:85205584105
SN - 0306-2619
VL - 377
JO - Applied Energy
JF - Applied Energy
M1 - 124594
ER -