Distributed Proximal Gradient Algorithm for Nonconvex Optimization over Time-Varying Networks

Xia Jiang, Xianlin Zeng, Jian Sun*, Jie Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

This article studies the distributed nonconvex optimization problem with nonsmooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is the sum of local objective functions, which consist of differentiable (possibly nonconvex) cost functions and nonsmooth convex functions. This article presents a distributed proximal gradient algorithm for the nonsmooth nonconvex optimization problem. Over time-varying multiagent networks, the proposed algorithm updates local variable estimates with a constant step-size at the cost of multiple consensus steps, where the number of communication rounds increases over time. We prove that the generated local variables achieve consensus and converge to the set of critical points. Finally, we verify the efficiency of the proposed algorithm by numerical simulations.
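The abstract does not give the exact update rule, so the following is only a minimal sketch of the kind of iteration it describes: at each iteration, every agent runs several consensus (averaging) rounds over the time-varying network, takes a gradient step on its local smooth cost with a constant step-size, and then applies the proximal operator of the nonsmooth convex term. The ℓ1 regularizer, the doubly stochastic mixing matrices, the linearly growing round count, and all names and constants below are illustrative assumptions, not taken from the paper.

```python
# Sketch of a distributed proximal gradient iteration over a time-varying
# network, based only on the abstract above. Assumptions (not from the paper):
# an l1 regularizer r(x) = lam * ||x||_1, doubly stochastic mixing matrices,
# and a number of consensus rounds that grows linearly with the iteration t.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_prox_grad(grads, X0, mixing_matrix, lam=0.1, alpha=0.05,
                          rounds=lambda t: t + 1, T=100):
    """
    grads: list of local gradient oracles, grads[i](x) -> gradient of f_i at x.
    X0: (n_agents, dim) array of initial local estimates.
    mixing_matrix: callable t -> doubly stochastic (n_agents, n_agents) matrix
                   of the time-varying network at iteration t.
    rounds: number of consensus rounds at iteration t (increases with t).
    """
    X = X0.copy()
    n = X.shape[0]
    for t in range(T):
        # 1) Multiple consensus (averaging) steps over the current graph.
        for _ in range(rounds(t)):
            X = mixing_matrix(t) @ X
        # 2) Local gradient step on the smooth cost, constant step-size alpha.
        G = np.stack([grads[i](X[i]) for i in range(n)])
        # 3) Proximal step for the nonsmooth convex term (here: l1 norm).
        X = soft_threshold(X - alpha * G, alpha * lam)
    return X

# Tiny usage example: 3 agents minimizing sum_i 0.5*||x - b_i||^2 + lam*||x||_1.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.normal(size=(3, 5))
    grads = [(lambda x, bi=bi: x - bi) for bi in b]
    W = np.full((3, 3), 1.0 / 3.0)  # fixed complete graph, just for the demo
    X = distributed_prox_grad(grads, np.zeros((3, 5)), lambda t: W)
    print(X.round(3))
```

Letting the number of consensus rounds grow with t is what permits a constant step-size while still driving the local estimates to consensus, matching the trade-off stated in the abstract.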

Original language: English
Pages (from-to): 1005-1017
Number of pages: 13
Journal: IEEE Transactions on Control of Network Systems
Volume: 10
Issue number: 2
DOI
Publication status: Published - 1 Jun 2023
