VISION-AIDED DEEP REINFORCEMENT LEARNING FOR ENERGY MANAGEMENT OF HYBRID ELECTRIC VEHICLES

Yong Wang, Yuankai Wu*, Jiankun Peng*, Huachun Tan, Dechong Zeng, Hongwen He

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper introduces an energy management strategy that combines visual perception and deep reinforcement learning (DRL) to minimize fuel consumption. The proposed method autonomously learns the optimal control policy without any prediction effort. A monocular camera mounted behind the windshield of the car captures visual information as input. A state-of-the-art convolutional neural network based object detection method then detects and classifies traffic lights, and the detected traffic light information is fed as a state input to a model-free DRL based energy management system with continuous control actions, so that traffic light information is incorporated directly into the energy management system. The experimental results indicate that, under a real-world driving cycle, the fuel economy of the proposed vision-aided strategy reaches 94.5% of that of the dynamic programming-based method and is 6.8% better than that of the original DRL algorithm without traffic light information.
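The sketch below (not the authors' code) illustrates, under stated assumptions, how vision-derived traffic light information could be appended to the state vector of a continuous-action DRL energy management agent as described in the abstract. The state layout, normalization constants, network sizes, and names such as build_state and Actor are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: augmenting a DRL state vector with detected traffic-light
# features for a continuous-action energy-management policy.
# All names, dimensions, and scalings below are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

N_LIGHT_CLASSES = 3  # assumed classes: 0 = red, 1 = yellow, 2 = green


def build_state(soc, speed_mps, power_demand_kw, light_class, light_distance_m):
    """Concatenate powertrain signals with vision-derived traffic-light features."""
    light_onehot = np.zeros(N_LIGHT_CLASSES, dtype=np.float32)
    if light_class is not None:  # None when no traffic light is detected
        light_onehot[light_class] = 1.0
    dist = 0.0 if light_distance_m is None else light_distance_m / 200.0  # rough normalization
    return np.concatenate([
        np.array([soc, speed_mps / 30.0, power_demand_kw / 100.0], dtype=np.float32),
        light_onehot,
        np.array([dist], dtype=np.float32),
    ])


class Actor(nn.Module):
    """Deterministic policy mapping the augmented state to a continuous
    action in [0, 1], e.g. the fraction of maximum engine power."""
    def __init__(self, state_dim, action_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Sigmoid(),
        )

    def forward(self, state):
        return self.net(state)


# Example: a red light detected 80 m ahead of the vehicle.
state = build_state(soc=0.6, speed_mps=12.0, power_demand_kw=25.0,
                    light_class=0, light_distance_m=80.0)
actor = Actor(state_dim=state.shape[0])
action = actor(torch.from_numpy(state).unsqueeze(0))
print(action)  # engine power fraction proposed by the (untrained) policy
```

In an actor-critic setup such as DDPG, this augmented state would be consumed by both the actor and the critic, letting the agent anticipate stops at red lights and adjust the engine/battery power split accordingly.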

Original language: English
Journal: Energy Proceedings
Volume: 3
DOIs
Publication status: Published - 2019
Event: 11th International Conference on Applied Energy, ICAE 2019 - Västerås, Sweden
Duration: 12 Aug 2019 – 15 Aug 2019

Keywords

  • deep reinforcement learning
  • energy management strategy
  • hybrid electric vehicle
  • traffic light
  • visual perception
