Stabilization Approaches for Reinforcement Learning-Based End-to-End Autonomous Driving

  • Beijing Institute of Technology
  • Shanghai Jiao Tong University
  • Nanjing University of Science and Technology

Research output: Contribution to journal › Article › peer-review

Abstract

Deep reinforcement learning (DRL) has been successfully applied to end-to-end autonomous driving, especially in simulation environments. However, common DRL approaches are sometimes unstable or fail to converge in complex autonomous driving scenarios. This paper proposes two approaches to improve the stability of policy model training while using as little manually collected data as possible. In the first approach, reinforcement learning is combined with imitation learning: a feature network is trained on a small amount of manual data to initialize the parameters. In the second approach, an auxiliary network is added to the reinforcement learning framework, which leverages real-time measurement information to deepen the agent's understanding of the environment without any guidance from demonstrators. To verify the effectiveness of these two approaches, simulations are conducted in image-based and lidar-based end-to-end autonomous driving systems, respectively. The approaches are tested not only in a virtual game world but also in Gazebo, where we build a 3D world based on the real vehicle model of the Ranger XP900 platform, real 3D obstacle models, and real motion constraints with inertial characteristics, so that the trained end-to-end autonomous driving model is better suited to the real world. Experimental results show that performance improves by over 45% in the virtual game world, and that training converges quickly and stably in Gazebo, where previous methods can hardly converge.
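The two approaches described above can be illustrated schematically. The following is a minimal sketch, not the authors' implementation: the network sizes, the mean-squared-error objectives, and the `aux_weight` trade-off coefficient are all assumptions, and the shared feature extractor is reduced to a single linear layer for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared feature network (a single tanh layer stands in for the
# image/lidar feature extractor; obs_dim=8 -> feat_dim=16).
W_feat = rng.normal(scale=0.1, size=(8, 16))

def features(obs):
    return np.tanh(obs @ W_feat)

# Approach 1: imitation pretraining. A small set of manual
# demonstrations (obs, expert_action) supervises the feature
# network, whose weights then initialize the RL policy.
def imitation_loss(obs, expert_actions, W_act):
    pred = features(obs) @ W_act          # behavior-cloning head
    return float(np.mean((pred - expert_actions) ** 2))

# Approach 2: auxiliary network. A head on the same features
# regresses real-time measurements (e.g. speed, yaw rate) and is
# trained jointly with the RL objective -- no demonstrator needed.
def total_loss(policy_loss, obs, measurements, W_aux, aux_weight=0.5):
    aux_pred = features(obs) @ W_aux
    aux_loss = float(np.mean((aux_pred - measurements) ** 2))
    return policy_loss + aux_weight * aux_loss
```

In this reading, both approaches regularize the shared feature network: the first with a one-off supervised warm start, the second with a continuous auxiliary signal available at every step.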

Original language: English
Article number: 9028159
Pages (from-to): 4740-4750
Number of pages: 11
Journal: IEEE Transactions on Vehicular Technology
Volume: 69
Issue: 5
DOI
Publication status: Published - May 2020
