TY - JOUR
T1 - Toward Human-in-the-Loop AI
T2 - Enhancing Deep Reinforcement Learning via Real-Time Human Guidance for Autonomous Driving
AU - Wu, Jingda
AU - Huang, Zhiyu
AU - Hu, Zhongxu
AU - Lv, Chen
N1 - Publisher Copyright:
© 2022 The Author
PY - 2023/2
Y1 - 2023/2
N2 - Due to its limited intelligence and abilities, machine learning is currently unable to handle various situations and thus cannot completely replace humans in real-world applications. Because humans exhibit robustness and adaptability in complex scenarios, it is crucial to introduce humans into the training loop of artificial intelligence (AI), leveraging human intelligence to further advance machine learning algorithms. In this study, a real-time human-guidance-based deep reinforcement learning (Hug-DRL) method is developed for policy training in an end-to-end autonomous driving case. With our newly designed mechanism for control transfer between humans and automation, humans are able to intervene and correct the agent's unreasonable actions in real time when necessary during the model training process. Based on this human-in-the-loop guidance mechanism, an improved actor-critic architecture with modified policy and value networks is developed. The fast convergence of the proposed Hug-DRL allows real-time human guidance actions to be fused into the agent's training loop, further improving the efficiency and performance of DRL. The developed method is validated by human-in-the-loop experiments with 40 subjects and compared with other state-of-the-art learning approaches. The results suggest that the proposed method can effectively enhance the training efficiency and performance of the DRL algorithm under human guidance without imposing specific requirements on participants' expertise or experience.
KW - Autonomous driving
KW - Deep reinforcement learning
KW - Human guidance
KW - Human-in-the-loop AI
UR - https://www.scopus.com/pages/publications/85146715634
U2 - 10.1016/j.eng.2022.05.017
DO - 10.1016/j.eng.2022.05.017
M3 - Article
AN - SCOPUS:85146715634
SN - 2095-8099
VL - 21
SP - 75
EP - 91
JO - Engineering
JF - Engineering
ER -