TY - GEN
T1 - Deep Q-Learning Based Energy Management Strategy for a Series Hybrid Electric Tracked Vehicle and Its Adaptability Validation
AU - He, Dingbo
AU - Zou, Yuan
AU - Wu, Jinlong
AU - Zhang, Xudong
AU - Zhang, Zhigang
AU - Wang, Ruizhi
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - In this paper, a novel deep Q-learning (DQL) based energy management strategy for a series hybrid electric tracked vehicle (SHETV) is proposed. First, the configuration of the SHETV powertrain is introduced, its system model is established accordingly, and the energy management problem is formulated. Second, an energy management control policy based on the DQL algorithm is developed. To address the curse-of-dimensionality problem of the conventional reinforcement learning (RL) strategy, two deep Q-networks with identical structures and initial weights are built and trained to approximate the action-value function and improve the robustness of the whole model. The DQL-based strategy is then trained and validated using driving cycle data collected in the real world, and the results show that it reduces fuel consumption by approximately 5.9% compared with the traditional RL strategy. Finally, a new driving cycle is executed on the trained DQL model and used to retrain the RL model for comparison. The result indicates that the DQL strategy consumes about 6.34% less fuel than the RL strategy, which confirms the adaptability of the DQL strategy.
AB - In this paper, a novel deep Q-learning (DQL) based energy management strategy for a series hybrid electric tracked vehicle (SHETV) is proposed. First, the configuration of the SHETV powertrain is introduced, its system model is established accordingly, and the energy management problem is formulated. Second, an energy management control policy based on the DQL algorithm is developed. To address the curse-of-dimensionality problem of the conventional reinforcement learning (RL) strategy, two deep Q-networks with identical structures and initial weights are built and trained to approximate the action-value function and improve the robustness of the whole model. The DQL-based strategy is then trained and validated using driving cycle data collected in the real world, and the results show that it reduces fuel consumption by approximately 5.9% compared with the traditional RL strategy. Finally, a new driving cycle is executed on the trained DQL model and used to retrain the RL model for comparison. The result indicates that the DQL strategy consumes about 6.34% less fuel than the RL strategy, which confirms the adaptability of the DQL strategy.
KW - Deep Q-Learning (DQL)
KW - energy management strategy
KW - reinforcement learning
KW - series hybrid electric tracked vehicle (SHETV)
UR - http://www.scopus.com/inward/record.url?scp=85071326251&partnerID=8YFLogxK
U2 - 10.1109/ITEC.2019.8790630
DO - 10.1109/ITEC.2019.8790630
M3 - Conference contribution
AN - SCOPUS:85071326251
T3 - ITEC 2019 - 2019 IEEE Transportation Electrification Conference and Expo
BT - ITEC 2019 - 2019 IEEE Transportation Electrification Conference and Expo
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE Transportation Electrification Conference and Expo, ITEC 2019
Y2 - 19 June 2019 through 21 June 2019
ER -