智能网联汽车基于逆强化学习的轨迹规划优化机制研究

Translated title of the contribution: Research on Inverse Reinforcement Learning-Based Trajectory Planning Optimization Mechanism for Autonomous Connected Vehicles

Haonan Peng, Minghuan Tang, Qiwen Zha, Cong Wang*, Weida Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Trajectory planning is one of the most significant technologies for autonomous connected vehicles. However, existing trajectory planning strategies suffer from several problems, such as weak real-time performance, difficulty in calibrating the weighting coefficients of the optimization objectives, and the poor interpretability of direct imitation learning methods. Therefore, an inverse reinforcement learning (IRL) method based on the maximum entropy principle was proposed in this paper. By learning the underlying optimization mechanism of driving trajectories from experienced drivers, the method plans lane-changing expert trajectories that align with human driving experience, laying a theoretical foundation for solving the real-time and interpretability problems of trajectory planning methods. Finally, taking general-risk and high-risk scenarios as application cases, the feasibility and effectiveness of the proposed trajectory planning method were validated through Matlab/Simulink simulations.
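The abstract describes learning the weighting coefficients of the trajectory-planning objective from expert demonstrations via the maximum entropy principle. As a rough illustration of how such a weight-learning step typically works, the sketch below fits a linearly weighted trajectory reward by matching expert feature expectations over a finite candidate set. The feature design, candidate-trajectory sampling, and all function and variable names here are hypothetical assumptions, not taken from the paper (the authors' implementation is in Matlab/Simulink).

```python
import numpy as np

# Minimal sketch of maximum-entropy IRL over a finite set of candidate
# trajectories (hypothetical setup, not the paper's exact formulation).
# The reward of a trajectory tau is assumed linear in hand-crafted
# features, R(tau) = w . phi(tau), and under the maximum entropy model
# P(tau) is proportional to exp(R(tau)).

def model_feature_expectation(features, w):
    """Expected feature vector under P(tau) proportional to exp(w . phi(tau))."""
    scores = features @ w
    scores -= scores.max()          # shift for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return probs @ features         # (d,) expected features

def maxent_irl(expert_features, candidate_features, lr=0.05, iters=500):
    """Fit reward weights so the model matches expert feature counts.

    expert_features:    (n_expert, d) features of demonstrated trajectories
    candidate_features: (n_cand, d)   features of sampled candidates
    """
    w = np.zeros(candidate_features.shape[1])
    mu_expert = expert_features.mean(axis=0)
    for _ in range(iters):
        mu_model = model_feature_expectation(candidate_features, w)
        # Gradient of the average log-likelihood of the expert data.
        w += lr * (mu_expert - mu_model)
    return w

# Toy usage with 3 hypothetical features per trajectory, e.g.
# (lateral jerk, time headway, lane-change duration).
rng = np.random.default_rng(0)
candidates = rng.normal(size=(200, 3))
experts = candidates[candidates[:, 0] < -0.5]   # pretend experts favor low jerk
w = maxent_irl(experts, candidates)
print("learned reward weights:", w)
```

Once converged, such learned weights can score candidate lane-change trajectories at planning time, which is how this family of methods sidesteps the hand-calibration of weighting coefficients that the abstract identifies as a problem.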

Original language: Chinese (Traditional)
Pages (from-to): 820-831
Number of pages: 12
Journal: Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology
Volume: 43
Issue number: 8
DOIs
Publication status: Published - Aug 2023

