TY - JOUR
T1 - Towards a fossil-free urban transport system
T2 - An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning
AU - Huang, Ruchen
AU - He, Hongwen
AU - Su, Qicong
N1 - Publisher Copyright:
© 2024
PY - 2024/6/1
Y1 - 2024/6/1
AB - Deep reinforcement learning (DRL) is now a research focus for the energy management of fuel cell vehicles (FCVs) to improve hydrogen utilization efficiency. However, since DRL-based energy management strategies (EMSs) must be retrained whenever the type of FCV changes, developing DRL-based EMSs for different FCVs is a laborious task. To address this, this article introduces transfer learning (TL) into DRL to design a novel deep transfer reinforcement learning (DTRL) method, and then proposes an intelligent transferable energy management framework between two different urban FCVs based on the designed DTRL method, enabling the reuse of well-trained EMSs. First, an enhanced soft actor-critic (SAC) algorithm integrating prioritized experience replay (PER) is formulated as the DRL algorithm studied in this article. Then, an enhanced-SAC-based EMS for a light fuel cell hybrid electric vehicle (FCHEV) is pre-trained using massive real-world driving data. After that, the knowledge stored in the FCHEV's well-trained EMS is captured and transferred into the EMS of a heavy-duty fuel cell hybrid electric bus (FCHEB). Finally, the FCHEB's EMS is fine-tuned in a stochastic environment to ensure adaptability to real driving conditions. Simulation results indicate that, compared to the state-of-the-art baseline EMS, the proposed DTRL-based EMS accelerates convergence by 91.55% and improves fuel economy by 6.78%. This article contributes to shortening the development cycle of DRL-based EMSs and improving the utilization efficiency of hydrogen energy in the urban transport sector.
KW - Deep reinforcement learning
KW - Energy management strategy
KW - Fuel cell hybrid electric vehicle
KW - Prioritized experience replay
KW - Soft actor-critic
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85189038991&partnerID=8YFLogxK
U2 - 10.1016/j.apenergy.2024.123080
DO - 10.1016/j.apenergy.2024.123080
M3 - Article
AN - SCOPUS:85189038991
SN - 0306-2619
VL - 363
JO - Applied Energy
JF - Applied Energy
M1 - 123080
ER -