TY - GEN
T1 - Motion Planning for Autonomous Vehicles in Uncertain Environments Using Hierarchical Distributional Reinforcement Learning
AU - Chen, Xuemei
AU - Yang, Yixuan
AU - Xu, Shuyuan
AU - Fu, Shuaiqi
AU - Yang, Dongqing
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Safe and effective motion planning is essential for autonomous vehicles to drive successfully in complex and dynamic urban environments. However, most current methods neglect the collision risk caused by obstacle occlusion and consider only longitudinal speed planning, which leads to overly conservative motion. The motion planning model proposed in this paper accounts for the vehicle's lateral motion while considering collision risk, improving both safety and motion flexibility. It integrates distributional reinforcement learning with a path-speed decoupling scheme, yielding a hierarchical distributional reinforcement learning iterative optimization motion planning model. The high-level path planning layer uses distributional reinforcement learning to choose local path points based on scattered point sampling. The low-level layer uses distributional reinforcement learning to adjust speed at each time step. The two layers achieve optimal performance through an iterative optimization method. The proposed model is trained and tested on the CARLA simulation platform in a scene where a pedestrian suddenly appears from a blind spot. The results show that, compared with a method that employs only speed planning, the proposed model's success rate increases to 99.75% and travel speed increases by 14.88%. The model is also verified on real driving data. The results demonstrate that the model can avoid risks caused by limited perception and responds flexibly to achieve efficient travel.
AB - Safe and effective motion planning is essential for autonomous vehicles to drive successfully in complex and dynamic urban environments. However, most current methods neglect the collision risk caused by obstacle occlusion and consider only longitudinal speed planning, which leads to overly conservative motion. The motion planning model proposed in this paper accounts for the vehicle's lateral motion while considering collision risk, improving both safety and motion flexibility. It integrates distributional reinforcement learning with a path-speed decoupling scheme, yielding a hierarchical distributional reinforcement learning iterative optimization motion planning model. The high-level path planning layer uses distributional reinforcement learning to choose local path points based on scattered point sampling. The low-level layer uses distributional reinforcement learning to adjust speed at each time step. The two layers achieve optimal performance through an iterative optimization method. The proposed model is trained and tested on the CARLA simulation platform in a scene where a pedestrian suddenly appears from a blind spot. The results show that, compared with a method that employs only speed planning, the proposed model's success rate increases to 99.75% and travel speed increases by 14.88%. The model is also verified on real driving data. The results demonstrate that the model can avoid risks caused by limited perception and responds flexibly to achieve efficient travel.
KW - autonomous vehicle
KW - hierarchical distributional reinforcement learning
KW - motion planning
KW - uncertain environments
UR - http://www.scopus.com/inward/record.url?scp=85200380094&partnerID=8YFLogxK
U2 - 10.1109/CCDC62350.2024.10587952
DO - 10.1109/CCDC62350.2024.10587952
M3 - Conference contribution
AN - SCOPUS:85200380094
T3 - Proceedings of the 36th Chinese Control and Decision Conference, CCDC 2024
SP - 1844
EP - 1851
BT - Proceedings of the 36th Chinese Control and Decision Conference, CCDC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 36th Chinese Control and Decision Conference, CCDC 2024
Y2 - 25 May 2024 through 27 May 2024
ER -