TY - JOUR
T1 - Optimal Tracking Control of Heterogeneous MASs Using Event-Driven Adaptive Observer and Reinforcement Learning
AU - Xu, Yong
AU - Sun, Jian
AU - Pan, Ya-Jun
AU - Wu, Zheng-Guang
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2024/4/1
Y1 - 2024/4/1
N2 - This article considers the output tracking control problem of nonidentical linear multiagent systems (MASs) using a model-free reinforcement learning (RL) algorithm, where some followers have no prior knowledge of the leader's information. To reduce the communication and computation burden among agents, an event-driven adaptive distributed observer is proposed to estimate the leader's system matrix and state, built on estimates of the relative states governed by an edge-based predictor. Meanwhile, an integral input-based triggering condition determines whether each agent transmits its control input to its neighbors. Then, an RL-based state feedback controller is developed for each agent to solve the output tracking control problem, which is converted into an optimal control problem by introducing a discounted performance function; the optimal solution is characterized by inhomogeneous algebraic Riccati equations (AREs). An off-policy RL algorithm is used to learn the solution of the inhomogeneous AREs online without requiring any knowledge of the system dynamics. Rigorous analysis shows that, under the proposed event-driven adaptive observer mechanism and RL algorithm, all followers asymptotically synchronize with the leader's output. Finally, a numerical simulation is provided to verify the theoretical results.
AB - This article considers the output tracking control problem of nonidentical linear multiagent systems (MASs) using a model-free reinforcement learning (RL) algorithm, where some followers have no prior knowledge of the leader's information. To reduce the communication and computation burden among agents, an event-driven adaptive distributed observer is proposed to estimate the leader's system matrix and state, built on estimates of the relative states governed by an edge-based predictor. Meanwhile, an integral input-based triggering condition determines whether each agent transmits its control input to its neighbors. Then, an RL-based state feedback controller is developed for each agent to solve the output tracking control problem, which is converted into an optimal control problem by introducing a discounted performance function; the optimal solution is characterized by inhomogeneous algebraic Riccati equations (AREs). An off-policy RL algorithm is used to learn the solution of the inhomogeneous AREs online without requiring any knowledge of the system dynamics. Rigorous analysis shows that, under the proposed event-driven adaptive observer mechanism and RL algorithm, all followers asymptotically synchronize with the leader's output. Finally, a numerical simulation is provided to verify the theoretical results.
KW - Adaptive observer
KW - event-triggered control
KW - multiagent systems (MASs)
KW - reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85139837806&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2022.3208237
DO - 10.1109/TNNLS.2022.3208237
M3 - Article
C2 - 36191114
AN - SCOPUS:85139837806
SN - 2162-237X
VL - 35
SP - 5577
EP - 5587
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 4
ER -