Event-Triggered ADP for Nonzero-Sum Games of Unknown Nonlinear Systems

Qingtao Zhao, Jian Sun*, Gang Wang, Jie Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

33 Citations (Scopus)

Abstract

For nonzero-sum (NZS) games of nonlinear systems, reinforcement learning (RL) or adaptive dynamic programming (ADP) has shown its capability of iteratively approximating the desired performance index and the optimal input policy. In this article, an event-triggered ADP algorithm is proposed for NZS games of continuous-time nonlinear systems with completely unknown system dynamics. To approximate the Nash equilibrium solution, critic neural networks and actor neural networks are utilized to estimate the value functions and the control policies, respectively. Compared with the traditional time-triggered mechanism, the proposed algorithm updates the neural network weights as well as the inputs of the players only when a state-based event-triggered condition is violated. It is shown that system stability and weight convergence are still guaranteed under mild assumptions, while the consumption of communication and computation resources is considerably reduced. Meanwhile, the infamous Zeno behavior is excluded by proving the existence of a minimum inter-event time (MIET), which ensures the feasibility of the closed-loop event-triggered continuous-time system. Finally, a numerical example is simulated to illustrate the effectiveness of the proposed approach.
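The core mechanism described above — holding the last sampled state at the controller and updating only when a state-based triggering condition is violated — can be illustrated with a minimal sketch. The scalar plant, the fixed linear policy standing in for the actor network, and the constant threshold below are all illustrative assumptions, not the paper's actual system or triggering rule:

```python
def simulate_event_triggered(steps=500, dt=0.01, threshold=0.05):
    """Event-triggered control sketch: the input is recomputed only
    when the gap between the true state x and the last sampled
    state x_hat exceeds a threshold (a stand-in for the paper's
    state-based triggering condition)."""
    x = 1.0          # hypothetical scalar plant state
    x_hat = x        # last sampled state held at the controller
    u = -x_hat       # simple stabilizing policy in place of the actor NN
    events = 0
    for _ in range(steps):
        # Triggering condition: resample (and, in the full algorithm,
        # update the actor/critic weights) only when |x - x_hat| is large.
        if abs(x - x_hat) > threshold:
            x_hat = x
            u = -x_hat
            events += 1
        x += dt * (-x + u)  # Euler step of the illustrative plant x' = -x + u
    return events, steps, x

events, steps, x_final = simulate_event_triggered()
```

Because the input is held constant between events, the number of controller updates is far smaller than the number of integration steps, which is the resource saving the abstract refers to; a positive lower bound on inter-event times (the MIET) is what rules out Zeno behavior in the continuous-time setting.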

Original language: English
Pages (from-to): 1905-1913
Number of pages: 9
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 33
Issue number: 5
DOIs
Publication status: Published - 1 May 2022

Keywords

  • Adaptive dynamic programming (ADP)
  • event-triggered
  • nonzero-sum (NZS) games
  • reinforcement learning (RL)
