TY - JOUR
T1 - Equipping With Cognition
T2 - Interactive Motion Planning Using Metacognitive-Attribution Inspired Reinforcement Learning for Autonomous Vehicles
AU - Hou, Xiaohui
AU - Gan, Minggang
AU - Wu, Wei
AU - Ji, Yuan
AU - Zhao, Shiyue
AU - Chen, Jie
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2025
Y1 - 2025
N2 - This study introduces the Metacognitive-Attribution Inspired Reinforcement Learning (MAIRL) approach, designed to address unprotected interactive left turns at intersections, one of the most challenging tasks in autonomous driving. By integrating Metacognitive Theory and Attribution Theory from psychology with reinforcement learning (RL), the study enriches the learning mechanisms of autonomous vehicles with human cognitive processes. Specifically, it applies the three core elements of Metacognitive Theory, namely Metacognitive Knowledge, Metacognitive Monitoring, and Metacognitive Reflection, to strengthen the control framework's capabilities in skill differentiation, real-time assessment, and adaptive learning for interactive motion planning. Furthermore, inspired by Attribution Theory, it decomposes the RL reward into three components: 1) skill improvement, 2) existing ability, and 3) environmental stochasticity. This framework emulates human learning and behavior adjustment, embedding a deeper cognitive emulation into the reinforcement learning algorithm to foster a unified cognitive structure and control strategy. Comparative tests conducted in intersection scenarios with differing traffic densities demonstrated the superior performance of the proposed controller, which outperformed baseline algorithms in success rate while incurring fewer collision and timeout incidents. This interdisciplinary approach not only enhances the understanding and applicability of RL algorithms but also represents a meaningful step toward modeling advanced human cognitive processes in autonomous driving.
AB - This study introduces the Metacognitive-Attribution Inspired Reinforcement Learning (MAIRL) approach, designed to address unprotected interactive left turns at intersections, one of the most challenging tasks in autonomous driving. By integrating Metacognitive Theory and Attribution Theory from psychology with reinforcement learning (RL), the study enriches the learning mechanisms of autonomous vehicles with human cognitive processes. Specifically, it applies the three core elements of Metacognitive Theory, namely Metacognitive Knowledge, Metacognitive Monitoring, and Metacognitive Reflection, to strengthen the control framework's capabilities in skill differentiation, real-time assessment, and adaptive learning for interactive motion planning. Furthermore, inspired by Attribution Theory, it decomposes the RL reward into three components: 1) skill improvement, 2) existing ability, and 3) environmental stochasticity. This framework emulates human learning and behavior adjustment, embedding a deeper cognitive emulation into the reinforcement learning algorithm to foster a unified cognitive structure and control strategy. Comparative tests conducted in intersection scenarios with differing traffic densities demonstrated the superior performance of the proposed controller, which outperformed baseline algorithms in success rate while incurring fewer collision and timeout incidents. This interdisciplinary approach not only enhances the understanding and applicability of RL algorithms but also represents a meaningful step toward modeling advanced human cognitive processes in autonomous driving.
KW - attribution theory
KW - autonomous vehicles
KW - Interactive motion planning
KW - metacognitive theory
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=105001060691&partnerID=8YFLogxK
U2 - 10.1109/TITS.2024.3520514
DO - 10.1109/TITS.2024.3520514
M3 - Article
AN - SCOPUS:105001060691
SN - 1524-9050
VL - 26
SP - 4178
EP - 4191
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 3
ER -