Adaptive Fixed-Time Optimal Formation Control for Uncertain Nonlinear Multiagent Systems Using Reinforcement Learning

Ping Wang, Chengpu Yu*, Maolong Lv, Jinde Cao

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

This article explores the application of a reinforcement learning (RL) strategy to achieve adaptive fixed-time (FxT) optimized formation control of uncertain nonlinear multiagent systems. The primary obstacle is attaining FxT stability under the actor-critic setting in the presence of intermediate estimation errors and generic system uncertainties. To overcome these challenges, the RL control algorithm is implemented using an identifier-actor-critic structure, where the identifier is utilized to address the system uncertainty involving unknown nonlinear dynamics and external disturbances. Furthermore, a novel quadratic function is introduced to establish the boundedness of the estimation error of the actor-critic learning law, which plays a pivotal role in the FxT stability analysis. Finally, a unified FxT optimized formation control strategy is developed, which guarantees the realization of the predetermined formation at a fixed time while optimizing the given performance measure. The effectiveness of the proposed control algorithm is verified through simulation of a team of marine surface vessels.
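To make the identifier-actor-critic structure concrete, the sketch below shows a generic version of that idea for a single scalar agent with unknown drift dynamics. This is not the paper's algorithm (which handles multiagent formations with FxT guarantees): the RBF features, gains, learning rates, and the stabilizing term in the actor are all illustrative assumptions. The identifier learns the unknown dynamics from the observation error, the critic adapts on a Bellman-like residual, and the actor combines a greedy term from the estimated value gradient with a feedback stabilizer.

```python
# Minimal identifier-actor-critic sketch (NOT the paper's FxT algorithm).
# Plant: xdot = f(x) + u, with f unknown to the controller.
import numpy as np

def phi(x):
    # Gaussian RBF features over the state; centers are an assumed design choice.
    centers = np.linspace(-2.0, 2.0, 7)
    return np.exp(-(x - centers) ** 2)

f_true = lambda x: -0.5 * x + 0.3 * np.sin(x)  # unknown drift dynamics

dt, steps = 0.01, 2000
x, x_hat = 1.5, 0.0
W_id = np.zeros(7)       # identifier weights: f_hat = W_id @ phi(x)
W_c = np.zeros(7)        # critic weights: value-gradient estimate dV = W_c @ phi(x)
k_id, g_id = 5.0, 20.0   # observer gain / identifier learning rate (assumed)
g_c, q, r = 2.0, 1.0, 1.0

errs = []
for _ in range(steps):
    p = phi(x)
    e_id = x - x_hat                 # identifier (observer) error
    f_hat = W_id @ p                 # identifier output
    dV = W_c @ p                     # critic's value-gradient estimate
    u = -0.5 / r * dV - 2.0 * x      # actor: greedy term + assumed stabilizer
    # Bellman-like residual of the running cost q*x^2 + r*u^2
    bellman = q * x**2 + r * u**2 + dV * (f_hat + u)
    sigma = p * (f_hat + u)
    # Identifier adaptation driven by the observation error
    W_id += dt * g_id * p * e_id
    # Critic adaptation: normalized gradient step on the residual
    W_c -= dt * g_c * bellman * sigma / (1.0 + sigma @ sigma)
    # Euler integration of plant and identifier
    x_new = x + dt * (f_true(x) + u)
    x_hat += dt * (f_hat + u + k_id * e_id)
    x = x_new
    errs.append(abs(x))
```

Running the loop drives both the formation-style tracking error `|x|` and the identifier error `|x - x_hat|` toward a small neighborhood of zero; the paper's contribution is precisely that its laws achieve this within a fixed time despite the estimation errors this sketch ignores.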

Original language: English
Pages (from-to): 1729-1743
Number of pages: 15
Journal: IEEE Transactions on Network Science and Engineering
Volume: 11
Issue number: 2
DOI
Publication status: Published - 1 Mar 2024

