Graphical Minimax Game and Off-Policy Reinforcement Learning for Heterogeneous MASs with Spanning Tree Condition

Wei Dong, Jianan Wang, Chunyan Wang*, Zhenqiang Qi, Zhengtao Ding

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

In this paper, the optimal consensus control problem is investigated for heterogeneous linear multi-agent systems (MASs) under a spanning tree condition, based on game theory and reinforcement learning. First, the graphical minimax game algebraic Riccati equation (ARE) is derived by converting the consensus problem into a zero-sum game between each agent and its neighbors. The asymptotic stability of the closed-loop systems and the minimax property of the resulting policies are proved theoretically. Then, a data-driven off-policy reinforcement learning algorithm is proposed to learn the optimal control policy online without knowledge of the system dynamics. A rank condition is established to guarantee convergence of the proposed algorithm to the unique solution of the ARE. Finally, the effectiveness of the proposed method is demonstrated through a numerical simulation.
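
For orientation: in a standard two-player zero-sum LQ game, the ARE takes the familiar form A^T P + P A + Q - P (B R^{-1} B^T - γ^{-2} D D^T) P = 0, and policy-iteration schemes solve such equations by alternating policy evaluation and policy improvement. As a minimal sketch of that policy-iteration backbone only, the Python snippet below runs Kleinman's model-based iteration for a single-agent LQR ARE and cross-checks the fixed point against SciPy's direct solver. All matrices are made-up examples; the sketch is model-based and single-agent, unlike the paper's model-free, graphical minimax algorithm.

# Illustrative only: Kleinman policy iteration for a single-agent LQR ARE.
# The paper's method is model-free and multi-agent (graphical minimax game);
# this sketch shows only the underlying policy-iteration idea.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical system matrices (not from the paper). A is Hurwitz, so
# K = 0 is an admissible initial stabilizing policy.
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost
R = np.eye(1)          # control cost

K = np.zeros((1, 2))   # initial stabilizing gain
for _ in range(50):
    A_cl = A - B @ K
    # Policy evaluation: solve A_cl^T P + P A_cl + Q + K^T R K = 0
    P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# Cross-check against SciPy's direct ARE solver.
P_are = solve_continuous_are(A, B, Q, R)
print("policy-iteration P:\n", P)
print("direct ARE P:\n", P_are)
print("max abs difference:", np.abs(P - P_are).max())

In a data-driven variant of this loop, the Lyapunov step is replaced by a least-squares fit to trajectory data collected under a behavior policy; the rank condition mentioned in the abstract would then act like a persistence-of-excitation requirement, ensuring that the least-squares problem has a unique solution.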

Original language: English
Article number: 2150011
Journal: Guidance, Navigation and Control
Volume: 1
Issue number: 3
DOIs
Publication status: Published - 1 Sept 2021

Keywords

  • Consensus control
  • MASs
  • Data-driven control
  • Minimax game
  • Policy iteration
  • Reinforcement learning
