Abstract
In this paper, the distributed coordination control problem of path tracking and Nash equilibrium seeking for networked automated ground vehicle systems with unknown dynamics is investigated under the framework of graphical games. Unlike existing works that assume known vehicle dynamics, this paper considers each vehicle to have completely unknown system dynamics. To solve this problem, a learning-based data-driven technique is proposed to identify and reconstruct the unknown system matrices. Then, based on the identified system matrices, an offline reinforcement learning (RL) algorithm is proposed to derive both the optimal control policies and the policy iteration solution of the graphical game, and its convergence is analyzed. In addition, an online learning algorithm that relies only on measured state and input information is developed to solve the optimal path tracking control problem. As a result, the proposed method completely removes the dependence on vehicle dynamics required by traditional tracking control protocols. The optimal distributed control policies obtained by the proposed RL algorithm satisfy the global Nash equilibrium and synchronize all tracked vehicles to the pinning vehicle. Numerical simulation results are provided to demonstrate the effectiveness of the theoretical analysis.
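The abstract does not give the paper's equations or the graphical-game formulation, so the following is only a minimal single-agent analogue of the policy evaluation/improvement loop that a model-based policy iteration step alternates, written in Python with a hypothetical double-integrator vehicle model and assumed-identified matrices `A`, `B` and weights `Q`, `R`; it is a sketch of the general technique, not the paper's algorithm.

```python
# Minimal sketch (not the paper's method): discrete-time LQR policy iteration
# (Hewer-style), illustrating how a policy is repeatedly evaluated via a
# Lyapunov equation and then improved, given identified system matrices.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def policy_iteration(A, B, Q, R, K0, iters=50, tol=1e-9):
    """Alternate policy evaluation and improvement starting from a stabilizing gain K0."""
    K = K0
    P_prev = None
    for _ in range(iters):
        Ac = A - B @ K                                   # closed-loop matrix under gain K
        # Policy evaluation: P solves Ac^T P Ac - P + Q + K^T R K = 0
        P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
        # Policy improvement: K <- (R + B^T P B)^{-1} B^T P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        if P_prev is not None and np.max(np.abs(P - P_prev)) < tol:
            break
        P_prev = P
    return K, P

if __name__ == "__main__":
    # Hypothetical double-integrator vehicle model, used only for illustration.
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    Q, R = np.eye(2), np.eye(1)
    K0 = np.array([[0.5, 0.5]])                          # a stabilizing initial gain
    K, P = policy_iteration(A, B, Q, R, K0)
    print("converged gain K:", K)
```

In the paper's setting the evaluation and improvement steps would instead be coupled across neighboring vehicles through the communication graph, and the online variant described in the abstract would replace the model-based Lyapunov solve with updates driven by measured state and input data.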
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-11 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| Publication status | Accepted/In press - 2024 |
Keywords
- Automated ground vehicles (AGVs)
- Games
- Heuristic algorithms
- Mathematical models
- Numerical models
- Optimal control
- System dynamics
- Vehicle dynamics
- data-driven control
- distributed cooperative control
- path following control
- reinforcement learning (RL)