Power Management for a Plug-in Hybrid Electric Vehicle Based on Reinforcement Learning with Continuous State and Action Spaces

Yuecheng Li, Hongwen He, Jiankun Peng*, Hailong Zhang

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

39 Citations (Scopus)

Abstract

This paper presents a power management strategy for a plug-in hybrid electric vehicle based on reinforcement learning with continuous state and action spaces (the Actor-Critic method, which has been highly successful in the field of artificial intelligence). Compared with discrete optimization methods such as dynamic programming (DP) and Q-learning, the continuous method has great potential in complex environments (with many more state variables) without suffering from the curse of dimensionality. A vehicle model is constructed for the application of optimal algorithms, and the power management problem is reformulated in accordance with the Actor-Critic method. To make the training process of the proposed method quick and stable, stochastic gradient descent and experience replay are adopted. Both the AC-based and the DP-based methods are simulated on the same driving cycle. For one driving cycle, the total cost of the trained AC-based method is only 2.76% higher than that of DP, while it requires 88.7% less computation time than DP.
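The training scheme summarized in the abstract (an Actor-Critic update driven by stochastic gradient descent, with transitions drawn from an experience-replay buffer) can be illustrated with a short sketch. The Python fragment below is a minimal, hedged illustration only: the state features (battery SOC, vehicle speed, power demand), the linear function approximators, the Gaussian policy, and all hyper-parameters are assumptions made for the example, not the authors' exact design.

```python
# Minimal sketch of an Actor-Critic update with experience replay for a
# PHEV power-split task. State layout, policy form, and hyper-parameters
# are illustrative assumptions, not the paper's implementation.
import random
from collections import deque

import numpy as np

STATE_DIM = 3        # assumed state: [battery SOC, vehicle speed, power demand]
REPLAY_SIZE = 10000
BATCH_SIZE = 32
GAMMA = 0.99         # discount factor
LR_ACTOR = 1e-4
LR_CRITIC = 1e-3
SIGMA = 0.1          # fixed exploration std of the Gaussian policy

rng = np.random.default_rng(0)
w_critic = np.zeros(STATE_DIM)   # linear value function V(s) = w_critic . s
w_actor = np.zeros(STATE_DIM)    # linear policy mean mu(s) = w_actor . s
replay = deque(maxlen=REPLAY_SIZE)


def select_action(state):
    """Sample a continuous power-split action from a Gaussian policy.

    `state` is a length-STATE_DIM numpy array of features.
    """
    mu = float(w_actor @ state)
    return mu + SIGMA * rng.standard_normal()


def store(state, action, reward, next_state):
    """Append one transition to the experience-replay buffer."""
    replay.append((state, action, reward, next_state))


def train_step():
    """One stochastic-gradient Actor-Critic update on a replayed mini-batch."""
    if len(replay) < BATCH_SIZE:
        return
    batch = random.sample(list(replay), BATCH_SIZE)
    for s, a, r, s_next in batch:
        # TD error acts as the critic's error and the actor's advantage signal.
        td_error = r + GAMMA * (w_critic @ s_next) - (w_critic @ s)
        # Critic: semi-gradient TD(0) step on the linear value function.
        w_critic += LR_CRITIC * td_error * s
        # Actor: policy-gradient step using grad of log N(a | mu(s), SIGMA).
        grad_log_pi = (a - w_actor @ s) / (SIGMA ** 2) * s
        w_actor += LR_ACTOR * td_error * grad_log_pi


# Example of one interaction step (all values are placeholders):
s = np.array([0.8, 15.0, 20.0])        # SOC = 80 %, 15 m/s, 20 kW demand
a = select_action(s)                   # proposed engine power share
r = -1.0                               # negative of fuel + electricity cost
s_next = np.array([0.79, 15.5, 22.0])
store(s, a, r, s_next)
train_step()
```

In a full power-management controller of this kind, `select_action` would propose the engine/battery power split at each time step, the combined fuel and electricity cost would supply the (negative) reward, and `train_step` would be called after each stored transition; replaying past transitions and using small SGD steps is what keeps training quick and stable, as the abstract notes.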

Original language: English
Pages (from-to): 2270-2275
Number of pages: 6
Journal: Energy Procedia
Volume: 142
DOIs
Publication status: Published - 2017
Event: 9th International Conference on Applied Energy, ICAE 2017 - Cardiff, United Kingdom
Duration: 21 Aug 2017 - 24 Aug 2017

Keywords

  • Actor-Critic architecture
  • dynamic programming
  • plug-in hybrid electric vehicle
  • power management
  • reinforcement learning
