TY - JOUR
T1 - Model-free learning-based distributed cooperative tracking control of human-in-the-loop multi-agent systems
AU - Mei, Di
AU - Sun, Jian
AU - Xu, Yong
AU - Dou, Lihua
N1 - Publisher Copyright:
© 2024 John Wiley & Sons Ltd.
PY - 2024
Y1 - 2024
N2 - This article studies the model-free learning-based distributed cooperative tracking control of human-in-the-loop multi-agent systems in the presence of an active leader. The core role of the human-in-the-loop is to use a ground station to send control commands to the non-zero control input of the leader, thereby directly or indirectly steering a group of agents to complete complex tasks. Meanwhile, three essential demands, namely a completely unknown system model, optimal achievement of the control objective, and no requirement for an initial admissible control strategy, are satisfied simultaneously. It is worth emphasizing that existing results satisfy at most one or two of these demands and are therefore not applicable to this problem. In this article, a model-based human-in-the-loop learning algorithm is first presented to achieve optimal tracking control, and the convergence of the proposed learning algorithm is proved. Then, a bias-based data-driven learning algorithm is proposed, which overcomes the difficulties caused by the three demands above. Finally, the validity of the theoretical results is verified by a numerical example.
AB - This article studies the model-free learning-based distributed cooperative tracking control of human-in-the-loop multi-agent systems in the presence of an active leader. The core role of the human-in-the-loop is to use a ground station to send control commands to the non-zero control input of the leader, thereby directly or indirectly steering a group of agents to complete complex tasks. Meanwhile, three essential demands, namely a completely unknown system model, optimal achievement of the control objective, and no requirement for an initial admissible control strategy, are satisfied simultaneously. It is worth emphasizing that existing results satisfy at most one or two of these demands and are therefore not applicable to this problem. In this article, a model-based human-in-the-loop learning algorithm is first presented to achieve optimal tracking control, and the convergence of the proposed learning algorithm is proved. Then, a bias-based data-driven learning algorithm is proposed, which overcomes the difficulties caused by the three demands above. Finally, the validity of the theoretical results is verified by a numerical example.
KW - distributed tracking control
KW - multi-agent systems
KW - reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85188445022&partnerID=8YFLogxK
U2 - 10.1002/rnc.7333
DO - 10.1002/rnc.7333
M3 - Article
AN - SCOPUS:85188445022
SN - 1049-8923
JO - International Journal of Robust and Nonlinear Control
JF - International Journal of Robust and Nonlinear Control
ER -