Stabilized distributed online mirror descent for multi-agent optimization

Ping Wu, Heyan Huang*, Haolin Lu, Zhengyang Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In the domain of multi-agent networks, distributed online mirror descent (DOMD) and distributed online dual averaging (DODA) play pivotal roles as fundamental algorithms for distributed online convex optimization. However, in contrast to DODA, DOMD fails when employed with a dynamic learning rate sequence. To bridge this gap, we introduce two novel variants of DOMD by incorporating a distributed stabilization step in the primal space and the dual space, respectively. We demonstrate that our stabilized DOMD algorithms achieve a sublinear regret bound with a sequence of dynamic learning rates. We further evolve our dual-stabilized DOMD by integrating a lazily communicated subgradient descent step, resulting in a re-indexed DODA. This establishes a connection between the two types of distributed algorithms, which enhances our understanding of distributed optimization. Moreover, we extend our proposed algorithms to handle the case of exponentiated gradient, where the iterate is constrained to the probability simplex. Finally, we conduct extensive numerical simulations to validate our theoretical analysis.
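The abstract names the algorithmic ingredients: a consensus (mixing) step over the network, a mirror descent update, a dynamic learning rate, and a stabilization step. As a rough illustration only, the sketch below implements a generic primal-stabilized distributed mirror descent with the entropic mirror map (exponentiated gradient) on the probability simplex. The mixing matrix `P`, the schedule eta_t = 1/sqrt(t), and the stabilization form (shrinking toward the initial iterate as the learning rate decays) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def eg_update(x, grad, eta):
    """Exponentiated-gradient (entropic mirror descent) step on the simplex."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

def stabilized_domd(P, grads, T, d, x1=None):
    """Illustrative primal-stabilized distributed OMD with eta_t = 1/sqrt(t).

    P     : doubly stochastic mixing matrix (n x n) of the agent network
    grads : grads(i, t, x) -> subgradient of agent i's round-t loss at x
    """
    n = P.shape[0]
    x1 = np.full(d, 1.0 / d) if x1 is None else x1
    X = np.tile(x1, (n, 1))              # one simplex iterate per agent
    for t in range(1, T + 1):
        eta_t, eta_next = 1 / np.sqrt(t), 1 / np.sqrt(t + 1)
        X = P @ X                        # consensus step with neighbors
        X = np.array([eg_update(X[i], grads(i, t, X[i]), eta_t)
                      for i in range(n)])
        # stabilization: mix toward the initial point as eta decays,
        # so the effective step size stays controlled under dynamic rates
        lam = eta_next / eta_t
        X = lam * X + (1 - lam) * x1     # convex combo stays on the simplex
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 5
    P = np.full((n, n), 1.0 / n)         # complete-graph uniform mixing
    c = rng.random((n, d))
    grads = lambda i, t, x: c[i]         # static linear losses per agent
    print(stabilized_domd(P, grads, T=200, d=d))
```

In this toy run with static linear losses, all agents' iterates concentrate on low-cost coordinates while remaining on the simplex; the stabilization mixing keeps the update well-behaved as the learning rate shrinks.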

Original language: English
Article number: 112582
Journal: Knowledge-Based Systems
Volume: 304
DOI
Publication status: Published - 25 Nov 2024
