TY - JOUR
T1 - A unified deep reinforcement learning energy management strategy for multi-powertrain vehicles based on meta learning and hard sample mining
AU - Chen, Xiaokai
AU - Wu, Zhiming
AU - Karimi, Hamid Reza
AU - Li, Qianhui
AU - Li, Zhengyu
N1 - Publisher Copyright:
© 2025
PY - 2025/10
Y1 - 2025/10
N2 - Hybrid electric vehicles (HEVs) encompass diverse powertrain configurations and serve varied purposes. Commonly, energy management strategies (EMSs) have been developed separately for individual vehicle types and powertrain configurations under specific operating scenarios, often lacking generalizability across vehicle models and operating scenarios. To fill this gap, we propose a unified deep reinforcement learning (DRL) EMS based on meta-learning and online hard sample mining. This strategy enables adaptation to diverse vehicle types and powertrain configurations with minimal training samples through online fine-tuning. Firstly, meta-reinforcement learning is employed to simultaneously learn EMSs for multiple vehicle types across various operating scenarios, establishing a base-learner capable of achieving satisfactory performance with minor adjustments when confronted with new configurations and operating scenarios. Furthermore, to mitigate the slow convergence associated with training multiple vehicle types and operating scenarios concurrently, a hard sample mining method is used to optimize the presentation of random operating scenarios during training. This entails recording poorly performing conditions during training and prioritizing the training of simpler conditions before advancing to more challenging ones, thereby enhancing training efficiency. Finally, we validate the proposed EMS on a vehicle emulator. Results demonstrate a 40% improvement in convergence efficiency while achieving comparable final performance metrics.
AB - Hybrid electric vehicles (HEVs) encompass diverse powertrain configurations and serve varied purposes. Commonly, energy management strategies (EMSs) have been developed separately for individual vehicle types and powertrain configurations under specific operating scenarios, often lacking generalizability across vehicle models and operating scenarios. To fill this gap, we propose a unified deep reinforcement learning (DRL) EMS based on meta-learning and online hard sample mining. This strategy enables adaptation to diverse vehicle types and powertrain configurations with minimal training samples through online fine-tuning. Firstly, meta-reinforcement learning is employed to simultaneously learn EMSs for multiple vehicle types across various operating scenarios, establishing a base-learner capable of achieving satisfactory performance with minor adjustments when confronted with new configurations and operating scenarios. Furthermore, to mitigate the slow convergence associated with training multiple vehicle types and operating scenarios concurrently, a hard sample mining method is used to optimize the presentation of random operating scenarios during training. This entails recording poorly performing conditions during training and prioritizing the training of simpler conditions before advancing to more challenging ones, thereby enhancing training efficiency. Finally, we validate the proposed EMS on a vehicle emulator. Results demonstrate a 40% improvement in convergence efficiency while achieving comparable final performance metrics.
KW - Deep reinforcement learning
KW - Energy management strategy
KW - Hybrid electric vehicle
KW - Meta learning
UR - http://www.scopus.com/inward/record.url?scp=105005396090&partnerID=8YFLogxK
U2 - 10.1016/j.conengprac.2025.106396
DO - 10.1016/j.conengprac.2025.106396
M3 - Article
AN - SCOPUS:105005396090
SN - 0967-0661
VL - 163
JO - Control Engineering Practice
JF - Control Engineering Practice
M1 - 106396
ER -