Abstract
Deep reinforcement learning (DRL) holds great promise for enhancing the effectiveness of energy management strategies (EMSs) for hybrid electric vehicles (HEVs). However, online updating of DRL-based EMSs remains a challenge, making it difficult to ensure their long-term optimization performance. To this end, this study proposes an online updating EMS that improves the long-term energy efficiency of the DRL-based EMS for a fuel cell hybrid electric bus by exploiting the correlation between real-time traffic information and efficient hydrogen utilization. Specifically, the future optimal safe speed is planned by dynamic programming, which addresses the coupled spatiotemporal constraints in traffic information. Furthermore, a knowledge-sharing mechanism is developed by leveraging transfer learning (TL) to reuse the historical EMS for the planned future speed, enabling continuous updating of the soft actor-critic-based EMS. Finally, the updated EMS is deployed to the onboard controller to verify its real-time control effect via a processor-in-the-loop experiment. Results demonstrate that the proposed EMS enhances updating efficiency by 30.08% compared to the non-TL-integrated EMS and reduces hydrogen consumption by 6.11% compared to the static EMS. Moreover, the updated EMS can be deployed in real time in the onboard controller.
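The knowledge-sharing mechanism described above reuses a historical EMS policy as the starting point for training on the newly planned speed profile, rather than learning from scratch. A minimal sketch of that warm-start idea, assuming a simple layer-name-to-weights parameter dictionary (all names here are illustrative and not from the paper's implementation):

```python
# Hedged sketch of the transfer-learning warm start: copy transferable
# layers from a trained (historical) policy into a freshly initialized
# policy, so online updating resumes from prior knowledge.

def warm_start(historical_params, new_params, transferable_keys):
    """Return new_params with the listed layers replaced by the
    historical policy's weights; other layers keep their fresh init."""
    updated = dict(new_params)
    for key in transferable_keys:
        if key in historical_params:
            updated[key] = historical_params[key]
    return updated

# Illustrative parameter dictionaries (layer name -> weights).
historical = {"actor.l1": [0.8, -0.2], "actor.l2": [0.1], "critic.l1": [0.5]}
fresh      = {"actor.l1": [0.0, 0.0],  "actor.l2": [0.0], "critic.l1": [0.0]}

# Transfer only the actor layers; the critic is retrained online.
policy = warm_start(historical, fresh, ["actor.l1", "actor.l2"])
print(policy["actor.l1"])   # -> [0.8, -0.2] (transferred)
print(policy["critic.l1"])  # -> [0.0] (kept fresh)
```

In a real DRL framework the same pattern would be applied to the soft actor-critic network's state dict before resuming training on the dynamic-programming-planned speed trajectory.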
| Original language | English |
|---|---|
| Article number | 126902 |
| Journal | Applied Energy |
| Volume | 402 |
| DOI | |
| Publication status | Published - 15 Dec 2025 |
Title: UpdatingEMS: An online updating framework for deep reinforcement learning-based energy management of fuel cell hybrid electric bus with integrated transfer learning