Abstract
In this paper, we focus on the problem of highway merging via a parallel-type on-ramp for autonomous vehicles (AVs) in a decentralized, non-cooperative manner. This problem is challenging because of the highly dynamic and complex road environment. A deep reinforcement learning-based approach is proposed. The kernel of this approach is a Deep Q-Network (DQN) that takes the dynamic traffic state as input and outputs actions, including longitudinal acceleration (or deceleration) and lane merging. The total reward for this on-ramp merge problem consists of three parts: a merge success reward, a merge safety reward, and a merge efficiency reward. For model training and testing, we construct highway on-ramp merging simulation experiments with realistic driving parameters. The experimental results show that the proposed approach can make reasonable merging decisions based on observations of the traffic environment. We also compare our approach with a state-of-the-art method; our approach demonstrates superior performance by making challenging merging decisions in complex highway parallel-type on-ramp merging scenarios.
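To make the abstract's decision pipeline concrete, the following is a minimal sketch, assuming a small Q-network over a traffic-state vector, a discrete action set covering acceleration, deceleration, and the merge maneuver, and a reward summing success, safety, and efficiency terms. The state layout, action names, and weights are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a DQN-style merge decision layer (assumptions, not the paper's model).
import torch
import torch.nn as nn

ACTIONS = ["accelerate", "decelerate", "keep_speed", "merge"]  # assumed discrete action set


class MergeDQN(nn.Module):
    """Q-network: traffic-state vector in, one Q-value per discrete action out."""

    def __init__(self, state_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, len(ACTIONS)),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def merge_reward(merged: bool, time_gap_s: float, speed_mps: float,
                 target_speed_mps: float = 25.0) -> float:
    """Composite reward = success bonus + safety penalty + efficiency term (assumed weights)."""
    r_success = 10.0 if merged else 0.0                      # merge success reward
    r_safety = -5.0 if time_gap_s < 1.0 else 0.0             # penalize unsafe time gaps
    r_efficiency = -abs(speed_mps - target_speed_mps) / target_speed_mps
    return r_success + r_safety + r_efficiency


if __name__ == "__main__":
    q_net = MergeDQN()
    state = torch.randn(1, 8)                                # placeholder traffic observation
    action = ACTIONS[q_net(state).argmax(dim=1).item()]      # greedy action selection
    print(action, merge_reward(merged=False, time_gap_s=1.4, speed_mps=22.0))
```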
Original language | English |
---|---|
Pages (from-to) | 2726-2739 |
Number of pages | 14 |
Journal | Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering |
Volume | 235 |
Issue | 10-11 |
DOI | |
Publication status | Published - Sep 2021 |