TY - JOUR
T1 - Stabilized distributed online mirror descent for multi-agent optimization
AU - Wu, Ping
AU - Huang, Heyan
AU - Lu, Haolin
AU - Liu, Zhengyang
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/11/25
Y1 - 2024/11/25
N2 - In the domain of multi-agent networks, distributed online mirror descent (DOMD) and distributed online dual averaging (DODA) play pivotal roles as fundamental algorithms for distributed online convex optimization. However, in contrast to DODA, DOMD fails when employed with a dynamic learning rate sequence. To bridge this gap, we introduce two novel variants of DOMD by incorporating a distributed stabilization step in the primal space and the dual space, respectively. We demonstrate that our stabilized DOMD algorithms achieve a sublinear regret bound with a sequence of dynamic learning rates. We further evolve our dual-stabilized DOMD by integrating a lazily communicated subgradient descent step, resulting in a re-indexed DODA. This establishes a connection between the two types of distributed algorithms, which enhances our understanding of distributed optimization. Moreover, we extend our proposed algorithms to handle the case of exponentiated gradient, where the iterate is constrained to the probability simplex. Finally, we conduct extensive numerical simulations to validate our theoretical analysis.
AB - In the domain of multi-agent networks, distributed online mirror descent (DOMD) and distributed online dual averaging (DODA) play pivotal roles as fundamental algorithms for distributed online convex optimization. However, in contrast to DODA, DOMD fails when employed with a dynamic learning rate sequence. To bridge this gap, we introduce two novel variants of DOMD by incorporating a distributed stabilization step in the primal space and the dual space, respectively. We demonstrate that our stabilized DOMD algorithms achieve a sublinear regret bound with a sequence of dynamic learning rates. We further evolve our dual-stabilized DOMD by integrating a lazily communicated subgradient descent step, resulting in a re-indexed DODA. This establishes a connection between the two types of distributed algorithms, which enhances our understanding of distributed optimization. Moreover, we extend our proposed algorithms to handle the case of exponentiated gradient, where the iterate is constrained to the probability simplex. Finally, we conduct extensive numerical simulations to validate our theoretical analysis.
KW - Distributed convex optimization
KW - Dynamic learning rate
KW - Multi-agent network
KW - Online dual averaging
KW - Online mirror descent
KW - Stabilization step
UR - http://www.scopus.com/inward/record.url?scp=85206179722&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2024.112582
DO - 10.1016/j.knosys.2024.112582
M3 - Article
AN - SCOPUS:85206179722
SN - 0950-7051
VL - 304
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 112582
ER -