Finding the Equilibrium for Continuous Constrained Markov Games under the Average Criteria

Xiaofeng Jiang, Shuangwu Chen, Jian Yang*, Han Hu, Zhenliang Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In a Markov game with cost constraints and continuous actions, the local constraint of a single decision maker results from the interaction of the joint actions taken by the other decision makers. Such constraints are usually eliminated by imposing penalties on undesired states and policies, an approach that may suffer from the failure of the penalties as the game policy changes and from the nonexistence of mixed policies. In this article, an actor-critic deep neural network framework is used to solve this problem. The actor network establishes a continuous pure policy to replace the mixed policy, and the critic network converts the global interaction results into a local performance potential. The local search for a constrained equilibrium average objective is converted into an unconstrained minimax optimization. Based on this equivalent conversion, an optimality function of the local action is given to evaluate the influence of a single decision maker's action on the global system. The proposed algorithm simultaneously iterates the local constraint multiplier and the policy along opposite directions, and a typical congestion control numerical example from the emerging Internet of Things demonstrates its efficiency.
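The opposite-direction iteration described above can be illustrated with a minimal primal-dual sketch: the policy parameters descend a local Lagrangian while the constraint multiplier ascends it. The quadratic objective and cost below, and the finite-difference gradient, are hypothetical stand-ins for the average reward, average cost, and critic-supplied performance potential of the paper; this is a sketch of the general technique, not the authors' implementation.

```python
# Sketch of an opposite-direction primal-dual update for a constrained average
# objective: policy parameters are updated by gradient descent on the local
# Lagrangian L(theta, lam) = J(theta) + lam * (C(theta) - budget), while the
# local constraint multiplier lam is updated by gradient ascent.

import numpy as np

def average_objective(theta):
    """Hypothetical stand-in for the long-run average objective J(theta)."""
    return float(np.sum((theta - 1.0) ** 2))

def average_cost(theta, budget=0.5):
    """Hypothetical stand-in for the constrained average cost C(theta) - budget <= 0."""
    return float(np.sum(theta ** 2) - budget)

def numerical_grad(f, x, eps=1e-5):
    """Finite-difference gradient, replacing the critic's estimated policy gradient."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

theta = np.zeros(2)          # continuous pure policy parameters (actor side)
lam = 0.0                    # local constraint multiplier
eta_theta, eta_lam = 0.05, 0.1

for step in range(2000):
    grad_theta = numerical_grad(average_objective, theta) \
        + lam * numerical_grad(average_cost, theta)
    theta -= eta_theta * grad_theta                         # policy: descent direction
    lam = max(0.0, lam + eta_lam * average_cost(theta))     # multiplier: ascent direction

print("theta:", theta, "lambda:", lam, "constraint value:", average_cost(theta))
```

With these stand-in functions the iterates settle on the constraint boundary, which is the qualitative behavior expected of the minimax formulation: the multiplier grows while the constraint is violated and pushes the policy back into the feasible set.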

Original language: English
Article number: 8972586
Pages (from-to): 5399-5406
Number of pages: 8
Journal: IEEE Transactions on Automatic Control
Volume: 65
Issue number: 12
DOIs
Publication status: Published - Dec 2020

Keywords

  • Constrained Markov game (MG)
  • continuous state and action
  • expected average criteria
  • optimality equation
  • performance potential
