Stabilized distributed online mirror descent for multi-agent optimization

Ping Wu, Heyan Huang*, Haolin Lu, Zhengyang Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In the domain of multi-agent networks, distributed online mirror descent (DOMD) and distributed online dual averaging (DODA) play pivotal roles as fundamental algorithms for distributed online convex optimization. However, in contrast to DODA, DOMD fails when employed with a dynamic learning rate sequence. To bridge this gap, we introduce two novel variants of DOMD by incorporating a distributed stabilization step in the primal space and the dual space, respectively. We demonstrate that our stabilized DOMD algorithms achieve a sublinear regret bound with a sequence of dynamic learning rates. We further evolve our dual-stabilized DOMD by integrating a lazy communicated subgradient descent step, resulting in a re-indexed DODA. This establishes a connection between the two types of distributed algorithms, which enhances our understanding of distributed optimization. Moreover, we extend our proposed algorithms to handle the case of exponentiated gradient, where the iterate is constrained within a probability simplex. Finally, we conduct extensive numerical simulations to validate our theoretical analysis.
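The sketch below illustrates, under stated assumptions, the generic pattern the abstract describes: each agent mixes its iterate with its neighbors through a doubly stochastic matrix, applies a primal stabilization step that blends the mixed iterate toward the initial point by the ratio of consecutive learning rates, and then performs an exponentiated-gradient (entropic mirror descent) update on the probability simplex. The exact stabilization and communication scheme of the paper may differ; the network topology, loss functions, and learning-rate schedule here are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Minimal sketch of distributed online mirror descent with a primal
# stabilization step and an exponentiated-gradient update on the simplex.
# All specifics (ring topology, linear losses, eta(t) = 1/sqrt(t)) are
# assumptions made for this demo, not taken from the paper.

rng = np.random.default_rng(0)
n_agents, dim, T = 5, 4, 200

# Doubly stochastic mixing matrix for an assumed ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Toy local losses: each agent i holds a fixed linear loss f_i(x) = <c_i, x>.
C = rng.normal(size=(n_agents, dim))

def eta(t):
    # Dynamic (decreasing) learning-rate sequence.
    return 1.0 / np.sqrt(t)

x_init = np.full(dim, 1.0 / dim)          # uniform starting point in the simplex
X = np.tile(x_init, (n_agents, 1))        # one iterate per agent (rows)

for t in range(1, T + 1):
    G = C.copy()                          # subgradients of the linear losses
    Y = W @ X                             # consensus (communication) step
    # Primal stabilization: pull the mixed iterate toward x_init by the
    # learning-rate ratio (assumed form, in the spirit of stabilized OMD).
    ratio = eta(t + 1) / eta(t)
    Y = ratio * Y + (1.0 - ratio) * x_init
    # Exponentiated-gradient step: multiplicative update, then renormalize
    # so every agent's iterate stays in the probability simplex.
    X = Y * np.exp(-eta(t) * G)
    X /= X.sum(axis=1, keepdims=True)

print("final iterates (rows = agents):")
print(np.round(X, 3))
```

With a fixed doubly stochastic mixing matrix and a decreasing learning rate, the agents' iterates in this toy run drift toward a common point of the simplex, which is the qualitative behavior the consensus step is meant to enforce.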

Original language: English
Article number: 112582
Journal: Knowledge-Based Systems
Volume: 304
DOIs
Publication status: Published - 25 Nov 2024

Keywords

  • Distributed convex optimization
  • Dynamic learning rate
  • Multi-agent network
  • Online dual averaging
  • Online mirror descent
  • Stabilization step
