TY - JOUR
T1 - Personalized Decision-Making Framework for Collaborative Lane Change and Speed Control Based on Deep Reinforcement Learning
AU - Peng, Jiankun
AU - Yu, Sichen
AU - Ge, Yuming
AU - Li, Shen
AU - Fan, Yi
AU - Zhou, Jiaxuan
AU - He, Hongwen
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2025
Y1 - 2025
N2 - Autonomous driving (AD) is critically dependent on intelligent decision-making technology, which is a crucial ingredient in driving safety and overall vehicle performance; comprehensive consideration of driving heterogeneity, decision synergy, and game interaction is likewise a cornerstone. Accordingly, this paper constructs a cooperative decision-making framework for autonomous vehicles (AVs) that integrates driving styles within a hierarchical architecture based on deep reinforcement learning (DRL). The upper layer adopts an action-shielding-mechanism-based dueling double deep Q-network (D3QN) algorithm, incorporating lane advantages into a shared state space to make prompt lane-changing (LC) decisions, while the lower layer applies a soft actor-3-critic (SA3C) algorithm based on clipped triple Q-learning to provide continuous adaptive speed control. Three personalized collaborative decision strategies are formulated for particular driving styles via multi-objective optimization preferences combined with style-incentive prioritized experience replay (SIPER). The experimental results confirm that the proposed framework satisfies personalized driving demands in complex traffic scenarios, effectively explores prospective LC opportunities, and, compared with the normal strategy, improves driving efficiency by 35.40% with the aggressive strategy and comfort by 56.46% with the defensive strategy, while maintaining safety.
AB - Autonomous driving (AD) is critically dependent on intelligent decision-making technology, which is a crucial ingredient in driving safety and overall vehicle performance; comprehensive consideration of driving heterogeneity, decision synergy, and game interaction is likewise a cornerstone. Accordingly, this paper constructs a cooperative decision-making framework for autonomous vehicles (AVs) that integrates driving styles within a hierarchical architecture based on deep reinforcement learning (DRL). The upper layer adopts an action-shielding-mechanism-based dueling double deep Q-network (D3QN) algorithm, incorporating lane advantages into a shared state space to make prompt lane-changing (LC) decisions, while the lower layer applies a soft actor-3-critic (SA3C) algorithm based on clipped triple Q-learning to provide continuous adaptive speed control. Three personalized collaborative decision strategies are formulated for particular driving styles via multi-objective optimization preferences combined with style-incentive prioritized experience replay (SIPER). The experimental results confirm that the proposed framework satisfies personalized driving demands in complex traffic scenarios, effectively explores prospective LC opportunities, and, compared with the normal strategy, improves driving efficiency by 35.40% with the aggressive strategy and comfort by 56.46% with the defensive strategy, while maintaining safety.
KW - deep reinforcement learning
KW - driving style
KW - experience replay technique
KW - integrated decision-making
KW - multi-objective
UR - http://www.scopus.com/inward/record.url?scp=105006523296&partnerID=8YFLogxK
U2 - 10.1109/TITS.2025.3569592
DO - 10.1109/TITS.2025.3569592
M3 - Article
AN - SCOPUS:105006523296
SN - 1524-9050
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
ER -