Energy management based on reinforcement learning with double deep Q-learning for a hybrid electric tracked vehicle

Xuefeng Han, Hongwen He*, Jingda Wu, Jiankun Peng, Yuecheng Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

194 Citations (Scopus)

Abstract

An energy management strategy based on the double deep Q-learning algorithm is proposed for a dual-motor-driven hybrid electric tracked vehicle. A typical model framework of the tracked vehicle is established in which the lateral dynamics are taken into consideration. For the purpose of optimizing fuel consumption, a double deep Q-learning-based control structure is put forward. Compared with conventional deep Q-learning, the proposed strategy prevents the training process from falling into overoptimistic estimates of the policy value and shows significant advantages in terms of iterative convergence rate and optimization performance. Unique observation states are selected as the input variables of the reinforcement learning algorithm to reflect the characteristics of tracked vehicles. Conventional deep Q-learning and dynamic programming are also employed and compared with the proposed strategy over different driving schedules. Simulation results demonstrate that the fuel economy of the proposed methodology is 7.1% better than that of the conventional deep Q-learning-based strategy and reaches 93.2% of the dynamic programming benchmark. Moreover, the designed algorithm retains the battery SOC well across different initial values.
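The core mechanism the abstract attributes to double deep Q-learning, decoupling action selection from action evaluation to curb overoptimistic value estimates, can be illustrated with a minimal sketch of the bootstrap target. All names, network shapes, and the use of PyTorch here are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the double-DQN target computation (not the paper's code).
import torch

def ddqn_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    """Compute the double deep Q-learning bootstrap target for one batch.

    The online network picks the greedy action for the next state, while
    the slowly updated target network evaluates that action. Separating
    selection from evaluation is what reduces the overestimation bias of
    standard deep Q-learning noted in the abstract.
    """
    with torch.no_grad():
        # Action selection by the online network
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        # Action evaluation by the target network
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
    return reward + gamma * (1.0 - done) * next_q
```

In an energy-management setting, the action set would typically be a discretized engine power command and the state would include quantities such as vehicle speed, power demand, and battery SOC; the exact observation states used by the authors are only summarized, not enumerated, in the abstract.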

Original language: English
Article number: 113708
Journal: Applied Energy
Volume: 254
DOI
Publication status: Published - 15 Nov 2019
