TY - JOUR
T1 - Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework
AU - Huang, Ruchen
AU - He, Hongwen
AU - Gao, Miaojue
N1 - Publisher Copyright:
© 2023
PY - 2023/9/15
Y1 - 2023/9/15
AB - With the rapid development of artificial intelligence in recent years, deep reinforcement learning (DRL) has become the mainstream method for designing intelligent energy management strategies (EMSs) for fuel cell hybrid electric vehicles. Conventional DRL algorithms, however, suffer from low sampling efficiency and poor utilization of computing resources. When combined with a distributed architecture and parallel computation, DRL algorithms can be made considerably more efficient. To that end, this paper proposes a novel distributed DRL-based energy management framework for a fuel cell hybrid electric bus (FCHEB) that shortens the development cycle of DRL-based EMSs while reducing the total operation cost of the FCHEB. First, to make full use of limited computing resources, a novel asynchronous advantage actor-critic (A3C)-based energy management framework is designed by integrating multi-process parallel computation. Then, an EMS that accounts for the extra operation cost caused by fuel cell degradation and battery aging is designed within this framework. Furthermore, EMSs based on a conventional DRL algorithm, advantage actor-critic (A2C), and on a conventional distributed DRL framework, multi-thread A3C, are employed as baselines, and the proposed EMS is evaluated by training and testing on different driving cycles. Simulation results indicate that, compared with the A2C- and multi-thread A3C-based EMSs, the proposed EMS accelerates convergence by 87.46% and 88.92%, respectively, and reduces the total operation cost by 44.83% and 41.19%, respectively. The main contribution of this article is to explore the integration of multi-process parallel computation into a distributed DRL-based EMS for a fuel cell vehicle, enabling more efficient utilization of hydrogen energy in the transportation sector.
KW - Asynchronous advantage actor-critic (A3C)
KW - Distributed deep reinforcement learning
KW - Energy management strategy
KW - Fuel cell hybrid electric bus
KW - Multi-process parallel computation
UR - http://www.scopus.com/inward/record.url?scp=85161680654&partnerID=8YFLogxK
U2 - 10.1016/j.apenergy.2023.121358
DO - 10.1016/j.apenergy.2023.121358
M3 - Article
AN - SCOPUS:85161680654
SN - 0306-2619
VL - 346
JO - Applied Energy
JF - Applied Energy
M1 - 121358
ER -