TY - JOUR
T1 - Cooperative Path Following Control in Autonomous Vehicles Graphical Games
T2 - A Data-Based Off-Policy Learning Approach
AU - Xu, Yong
AU - Wu, Zheng Guang
AU - Pan, Ya Jun
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2024
Y1 - 2024
N2 - In this paper, the distributed coordination control of path tracking and Nash equilibrium seeking of networked automated ground vehicle systems with unknown dynamics is investigated under the framework of graphical games. Different from existing works that assume the vehicle dynamics are known, this paper considers vehicles with completely unknown system dynamics. To solve this problem, a learning-based data-driven technique is proposed to identify and reconstruct the unknown system matrices. Then, based on the identified system matrices, an offline reinforcement learning (RL) algorithm is proposed to derive both the optimal control policies and the policy iteration solution for graphical games, and its convergence is analyzed. In addition, an online learning algorithm relying only on measured state and input information is developed to solve the optimal path tracking control problem. As a result, the requirement of knowing the vehicle dynamics in traditional tracking control protocols is completely relaxed by the proposed method. The optimal distributed control policies found by the proposed RL algorithm satisfy the global Nash equilibrium and synchronize all tracking vehicles to the pinning vehicle. Numerical simulation results are provided to demonstrate the effectiveness of the theoretical analysis.
AB - In this paper, the distributed coordination control of path tracking and Nash equilibrium seeking of networked automated ground vehicle systems with unknown dynamics is investigated under the framework of graphical games. Different from existing works that assume the vehicle dynamics are known, this paper considers vehicles with completely unknown system dynamics. To solve this problem, a learning-based data-driven technique is proposed to identify and reconstruct the unknown system matrices. Then, based on the identified system matrices, an offline reinforcement learning (RL) algorithm is proposed to derive both the optimal control policies and the policy iteration solution for graphical games, and its convergence is analyzed. In addition, an online learning algorithm relying only on measured state and input information is developed to solve the optimal path tracking control problem. As a result, the requirement of knowing the vehicle dynamics in traditional tracking control protocols is completely relaxed by the proposed method. The optimal distributed control policies found by the proposed RL algorithm satisfy the global Nash equilibrium and synchronize all tracking vehicles to the pinning vehicle. Numerical simulation results are provided to demonstrate the effectiveness of the theoretical analysis.
KW - Automated ground vehicles (AGVs)
KW - data-driven control
KW - distributed cooperative control
KW - path following control
KW - reinforcement learning (RL)
UR - https://www.scopus.com/pages/publications/85184325920
U2 - 10.1109/TITS.2024.3355411
DO - 10.1109/TITS.2024.3355411
M3 - Article
AN - SCOPUS:85184325920
SN - 1524-9050
VL - 25
SP - 9364
EP - 9374
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 8
ER -