TY - JOUR
T1 - DISTRIBUTED ACCELERATED NASH EQUILIBRIUM LEARNING FOR TWO-SUBNETWORK ZERO-SUM GAME WITH BILINEAR COUPLING
AU - Zeng, Xianlin
AU - Dou, Lihua
AU - Cui, Jinqiang
N1 - Publisher Copyright:
© 2023 Institute of Information Theory and Automation of The Czech Academy of Sciences. All rights reserved.
PY - 2023
Y1 - 2023
N2 - This paper proposes a distributed accelerated first-order continuous-time algorithm for O(1/t²) convergence to Nash equilibria in a class of two-subnetwork zero-sum games with bilinear couplings. First-order methods, which only use subgradients of functions, are frequently used in distributed/parallel algorithms for solving large-scale and big-data problems due to their simple structures. However, in the worst cases, first-order methods for two-subnetwork zero-sum games often have an asymptotic or O(1/t) convergence. In contrast to existing time-invariant first-order methods, this paper designs a distributed accelerated algorithm by combining saddle-point dynamics and time-varying derivative feedback techniques. If the parameters of the proposed algorithm are suitable, the algorithm achieves O(1/t²) convergence in terms of the duality gap function without any uniform or strong convexity requirement. Numerical simulations show the efficacy of the algorithm.
AB - This paper proposes a distributed accelerated first-order continuous-time algorithm for O(1/t²) convergence to Nash equilibria in a class of two-subnetwork zero-sum games with bilinear couplings. First-order methods, which only use subgradients of functions, are frequently used in distributed/parallel algorithms for solving large-scale and big-data problems due to their simple structures. However, in the worst cases, first-order methods for two-subnetwork zero-sum games often have an asymptotic or O(1/t) convergence. In contrast to existing time-invariant first-order methods, this paper designs a distributed accelerated algorithm by combining saddle-point dynamics and time-varying derivative feedback techniques. If the parameters of the proposed algorithm are suitable, the algorithm achieves O(1/t²) convergence in terms of the duality gap function without any uniform or strong convexity requirement. Numerical simulations show the efficacy of the algorithm.
KW - two-subnetwork zero-sum game
KW - distributed accelerated algorithm
KW - Nash equilibrium learning
KW - nonsmooth function
KW - continuous-time algorithm
UR - http://www.scopus.com/inward/record.url?scp=85165138259&partnerID=8YFLogxK
U2 - 10.14736/kyb-2023-3-0418
DO - 10.14736/kyb-2023-3-0418
M3 - Article
AN - SCOPUS:85165138259
SN - 0023-5954
VL - 59
SP - 418
EP - 436
JO - Kybernetika
JF - Kybernetika
IS - 3
ER -