Abstract
Inspired by Nash game theory, this paper proposes a multiplayer mixed-zero-sum (MZS) nonlinear game that encompasses both the zero-sum and the nonzero-sum (NZS) Nash game settings. A synchronous reinforcement learning (RL) scheme based on an identifier-critic structure is developed to learn the Nash equilibrium solution of the proposed MZS game. First, the MZS game formulation is presented: one set of performance indexes defines an NZS Nash game among players 1 to $N$, while another performance index defines a zero-sum game between players $N$ and $N+1$. Player $N$ therefore cooperates with players 1 to $N-1$ while competing with player $N+1$, which leads to a Nash equilibrium among all players. A single-layer neural network (NN) is then used to approximate the unknown dynamics of the nonlinear game system. Finally, an NN-based RL scheme is developed to learn the optimal performance indexes, from which the optimal control policy of every player is produced directly, so that the Nash equilibrium is obtained; the actor NN widely used in the RL literature is therefore not needed. To this end, a recently proposed adaptive law is used to estimate the unknown identifier coefficient vectors, and an improved adaptive law with an error performance index is further developed to update the critic coefficient vectors. Both linear and nonlinear simulations are presented to demonstrate the existence of the Nash equilibrium for the MZS game and the performance of the proposed algorithm.
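To make the critic-only idea concrete, the following is a minimal sketch, not the paper's exact algorithm: policy iteration for a scalar linear-quadratic zero-sum game, where both players' policies are computed directly from the learned value parameter (so no separate actor is maintained). The system coefficients, weights, and the closed-form policy-evaluation step are all illustrative assumptions.

```python
# Illustrative sketch (assumed toy example, not the paper's method):
# dynamics  x' = a*x + b*u + k*d, cost  integral of q*x^2 + r*u^2 - gamma^2*d^2,
# where u minimizes and d maximizes. Value is V(x) = p*x^2.

a, b, k = -1.0, 1.0, 0.5      # system coefficients (hypothetical)
q, r, gamma = 1.0, 1.0, 1.0   # performance-index weights (hypothetical)

p = 0.0                        # critic parameter
c_u, c_d = 0.0, 0.0            # policy gains: u = -c_u*x, d = c_d*x
for _ in range(10):
    # Policy evaluation: set the Hamilton-Jacobi residual to zero and solve for p,
    # delta = x^2 * [q + r*c_u^2 - gamma^2*c_d^2 + 2*p*(a - b*c_u + k*c_d)] = 0
    a_cl = a - b * c_u + k * c_d               # closed-loop coefficient
    p = -(q + r * c_u**2 - gamma**2 * c_d**2) / (2.0 * a_cl)
    # Policy improvement: both policies follow directly from the critic,
    # which is why no actor NN is needed in a critic-only scheme.
    c_u = (b / r) * p                          # minimizing player
    c_d = (k / gamma**2) * p                   # maximizing (disturbance) player

# Analytic check against the game algebraic Riccati equation:
# (b^2/r - k^2/gamma^2)*p^2 - 2*a*p - q = 0
s = b**2 / r - k**2 / gamma**2
p_star = (2 * a + (4 * a**2 + 4 * s * q) ** 0.5) / (2 * s)
print(p, p_star)  # the learned p converges to p_star
```

In the scalar case, policy evaluation has a closed form; the paper instead uses an adaptive law to update the critic coefficient vector online, but the structure (evaluate the value, then derive every player's policy from it) is the same.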
| Original language | English |
| --- | --- |
| Article number | 8438886 |
| Pages (from-to) | 2739-2750 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
| Volume | 49 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - Dec 2019 |
Keywords
- Approximate dynamic programming (ADP)
- Nash games
- neural networks (NNs)
- reinforcement learning (RL)
- system identification