TY - JOUR
T1 - Tactical driving decisions of unmanned ground vehicles in complex highway environments
T2 - A deep reinforcement learning approach
AU - Wang, Huanjie
AU - Yuan, Shihua
AU - Guo, Mengyu
AU - Chan, Ching Yao
AU - Li, Xueyuan
AU - Lan, Wei
N1 - Publisher Copyright:
© IMechE 2020.
PY - 2021/3
Y1 - 2021/3
N2 - In this study, a deep reinforcement learning approach is proposed to handle tactical driving in complex highway traffic environments for unmanned ground vehicles. Tactical driving is a challenging topic for unmanned ground vehicles because of its interplay with routing decisions as well as real-time traffic dynamics. The core of our deep reinforcement learning approach is a deep Q-network that takes dynamic traffic information as input and outputs typical tactical driving decisions as actions. The reward is designed with consideration of successful highway exit, average traveling speed, and driving safety and comfort. In order to endow an unmanned ground vehicle with situational traffic information that is critical for tactical driving, the vehicle’s sensor information, such as vehicle position and velocity, is further augmented through the assessment of the ego-vehicle’s collision risk, potential field, and kinematics, and used as input for the deep Q-network model. A convolutional neural network is built and fine-tuned to extract traffic features that facilitate the decision-making process of Q-learning. For model training and testing, a highway simulation platform is constructed with realistic parameter settings obtained from a real-world highway traffic dataset. The performance of the deep Q-network model is validated with extensive simulation experiments under different parameter settings such as traffic density and risk level. The results demonstrate the potential of our deep Q-network model to learn challenging tactical driving decisions given multiple objectives and a complex traffic environment.
AB - In this study, a deep reinforcement learning approach is proposed to handle tactical driving in complex highway traffic environments for unmanned ground vehicles. Tactical driving is a challenging topic for unmanned ground vehicles because of its interplay with routing decisions as well as real-time traffic dynamics. The core of our deep reinforcement learning approach is a deep Q-network that takes dynamic traffic information as input and outputs typical tactical driving decisions as actions. The reward is designed with consideration of successful highway exit, average traveling speed, and driving safety and comfort. In order to endow an unmanned ground vehicle with situational traffic information that is critical for tactical driving, the vehicle’s sensor information, such as vehicle position and velocity, is further augmented through the assessment of the ego-vehicle’s collision risk, potential field, and kinematics, and used as input for the deep Q-network model. A convolutional neural network is built and fine-tuned to extract traffic features that facilitate the decision-making process of Q-learning. For model training and testing, a highway simulation platform is constructed with realistic parameter settings obtained from a real-world highway traffic dataset. The performance of the deep Q-network model is validated with extensive simulation experiments under different parameter settings such as traffic density and risk level. The results demonstrate the potential of our deep Q-network model to learn challenging tactical driving decisions given multiple objectives and a complex traffic environment.
KW - Intelligent vehicles
KW - deep reinforcement learning
KW - potential field
KW - safety assessment
KW - tactical driving decision
UR - http://www.scopus.com/inward/record.url?scp=85079181968&partnerID=8YFLogxK
U2 - 10.1177/0954407019898009
DO - 10.1177/0954407019898009
M3 - Article
AN - SCOPUS:85079181968
SN - 0954-4070
VL - 235
SP - 1113
EP - 1127
JO - Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering
JF - Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering
IS - 4
ER -