TY - JOUR
T1 - Event-triggered distributed zero-sum differential game for nonlinear multi-agent systems using adaptive dynamic programming
AU - Sun, Jingliang
AU - Long, Teng
N1 - Publisher Copyright:
© 2020 ISA
PY - 2021/4
Y1 - 2021/4
N2 - In this paper, to reduce the computational and communication burden, the event-triggered distributed zero-sum differential game problem for nonlinear multi-agent systems is investigated. First, based on the minimax principle, an adaptive event-triggered distributed iterative differential game strategy is derived, with an adaptive triggering condition that updates the control scheme aperiodically. Then, to implement the proposed strategy, the solution of the coupled Hamilton–Jacobi–Isaacs (HJI) equation is approximated by constructing a critic neural network (NN). To further relax the restrictive persistence of excitation (PE) condition, a novel PE-free updating law is designed using the experience replay method. The distributed event-triggered nonlinear system is then expressed as an impulsive dynamical system, and the stability analysis shows that the developed strategy guarantees that all closed-loop signals are uniformly ultimately bounded (UUB). Moreover, the minimal inter-sample time is proven to be lower bounded, which avoids the infamous Zeno behavior. Finally, simulation results show that the number of controller updates is reduced considerably, saving computational and communication resources.
AB - In this paper, to reduce the computational and communication burden, the event-triggered distributed zero-sum differential game problem for nonlinear multi-agent systems is investigated. First, based on the minimax principle, an adaptive event-triggered distributed iterative differential game strategy is derived, with an adaptive triggering condition that updates the control scheme aperiodically. Then, to implement the proposed strategy, the solution of the coupled Hamilton–Jacobi–Isaacs (HJI) equation is approximated by constructing a critic neural network (NN). To further relax the restrictive persistence of excitation (PE) condition, a novel PE-free updating law is designed using the experience replay method. The distributed event-triggered nonlinear system is then expressed as an impulsive dynamical system, and the stability analysis shows that the developed strategy guarantees that all closed-loop signals are uniformly ultimately bounded (UUB). Moreover, the minimal inter-sample time is proven to be lower bounded, which avoids the infamous Zeno behavior. Finally, simulation results show that the number of controller updates is reduced considerably, saving computational and communication resources.
KW - Adaptive dynamic programming
KW - Distributed differential game
KW - Event-triggered control
KW - Multi-agent systems
KW - Neural network
UR - http://www.scopus.com/inward/record.url?scp=85094620287&partnerID=8YFLogxK
U2 - 10.1016/j.isatra.2020.10.043
DO - 10.1016/j.isatra.2020.10.043
M3 - Article
C2 - 33127079
AN - SCOPUS:85094620287
SN - 0019-0578
VL - 110
SP - 39
EP - 52
JO - ISA Transactions
JF - ISA Transactions
ER -