Optimal Tracking Control of Heterogeneous MASs Using Event-Driven Adaptive Observer and Reinforcement Learning

Yong Xu, Jian Sun*, Ya Jun Pan, Zheng Guang Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

This article considers the output tracking control problem of nonidentical linear multiagent systems (MASs) using a model-free reinforcement learning (RL) algorithm, where some followers have no prior knowledge of the leader's information. To reduce the communication and computation burden among agents, an event-driven adaptive distributed observer is proposed to estimate the leader's system matrix and state; the observer relies on estimates of the relative states generated by an edge-based predictor. Meanwhile, an integral input-based triggering condition is exploited to decide whether each agent transmits its control input to its neighbors. Then, an RL-based state feedback controller is developed for each agent to solve the output tracking control problem, which is converted into an optimal control problem by introducing a discounted performance function. Inhomogeneous algebraic Riccati equations (AREs) are derived, and their solutions characterize the optimal controllers. An off-policy RL algorithm is used to learn the solutions of the inhomogeneous AREs online without requiring any knowledge of the system dynamics. Rigorous analysis shows that, under the proposed event-driven adaptive observer mechanism and RL algorithm, all followers synchronize with the leader's output asymptotically. Finally, a numerical simulation is presented to verify the theoretical results.
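For orientation only, a discounted quadratic performance index of the kind mentioned in the abstract commonly takes the generic form below; the symbols $e_i$, $u_i$, $Q_i$, $R_i$, and $\gamma$ are illustrative placeholders and not necessarily the article's own notation.

$$J_i = \int_{0}^{\infty} e^{-\gamma t}\left( e_i^{\top}(t)\, Q_i\, e_i(t) + u_i^{\top}(t)\, R_i\, u_i(t) \right)\mathrm{d}t, \qquad \gamma > 0,\ Q_i \succeq 0,\ R_i \succ 0,$$

where $e_i$ denotes the output tracking error of follower $i$ and $u_i$ its control input. Minimizing such an index subject to the follower dynamics leads to an ARE-type optimality condition, whose solution an off-policy RL algorithm can learn from measured data without a model of the dynamics.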

Original language: English
Pages (from-to): 5577-5587
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 4
DOI
Publication status: Published - 1 Apr 2024
