A Novel Experience Replay-Based Offline Deep Reinforcement Learning for Energy Management of Hybrid Electric Vehicles

Zegong Niu, Jingda Wu, Hongwen He*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Although deep reinforcement learning (DRL) techniques have been extensively studied for developing energy management strategies (EMSs) for hybrid electric vehicles (HEVs), the trial-and-error nature of conventional DRL requires interactive data collection to obtain a qualified strategy, which is infeasible in the real world owing to the unacceptable cost of exploratory actions. Offline DRL-based EMSs promise better practicality because they can improve using offline collected data alone. However, the performance of existing offline DRL solutions readily degrades with the varying quality of the training data. To overcome this problem, this article proposes a novel offline experience replay method that improves adaptiveness and robustness to imperfect data and thereby enhances the energy-saving performance of the associated EMSs. Specifically, the probability of a specific piece of data being used is automatically tuned according to its contribution to the DRL value function. A range of prevailing offline DRL algorithms are used to validate the proposed experience replay method. Validation results show that the proposed method consistently improves the fuel economy achieved by all involved offline DRL algorithms. A hardware-in-the-loop test is also conducted to validate the reliability of the method. The proposed method is promising for improving the practicability of offline DRL in EMSs for HEVs and in broader fields.
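The core idea in the abstract, tuning each transition's sampling probability by its contribution to the value function, can be illustrated with a minimal sketch. The paper's exact weighting rule is not given here, so this sketch assumes a generic priority signal (absolute TD error, as in standard prioritized experience replay) over a fixed offline dataset; the class name, parameters, and priority update are illustrative assumptions, not the authors' implementation.

```python
import random

class OfflinePrioritizedReplay:
    """Sketch: value-contribution-weighted sampling over a fixed offline
    dataset. Absolute TD error stands in for the (unspecified) paper's
    measure of a transition's contribution to the value function."""

    def __init__(self, transitions, alpha=0.6, eps=1e-3):
        self.data = list(transitions)   # fixed offline dataset, never grown
        self.alpha = alpha              # priority exponent (0 = uniform)
        self.eps = eps                  # floor so every sample stays reachable
        self.priorities = [1.0] * len(self.data)

    def sample(self, batch_size):
        # Sampling probability grows with priority^alpha, normalized
        # over the whole dataset.
        weights = [p ** self.alpha for p in self.priorities]
        total = sum(weights)
        probs = [w / total for w in weights]
        idx = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, indices, td_errors):
        # After each training step, refresh priorities from the new
        # TD errors so high-contribution data is replayed more often.
        for i, err in zip(indices, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In an offline setting the dataset itself is fixed; only the priorities change between gradient steps, which is what lets the replay scheme adapt to data of varying quality without any environment interaction.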

Original language: English
Journal: IEEE Transactions on Industrial Electronics
Publication status: Accepted/In press - 2024

Keywords

  • Energy management strategy
  • experience replay
  • hybrid electric vehicles
  • offline deep reinforcement learning
