Distributed Proximal Gradient Algorithm for Nonconvex Optimization over Time-Varying Networks

Xia Jiang, Xianlin Zeng, Jian Sun*, Jie Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

This article studies distributed nonconvex optimization with nonsmooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is a sum of local objective functions, each consisting of a differentiable (possibly nonconvex) cost function and a nonsmooth convex regularizer. We present a distributed proximal gradient algorithm for this nonsmooth nonconvex problem. Over time-varying multiagent networks, the proposed algorithm updates local variable estimates with a constant step-size at the cost of multiple consensus steps, where the number of communication rounds per iteration increases over time. We prove that the generated local variables achieve consensus and converge to the set of critical points. Finally, we verify the efficiency of the proposed algorithm through numerical simulations.
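The abstract gives no pseudocode, but the update it describes — a local proximal gradient step with a constant step-size, followed by an increasing number of consensus rounds over the time-varying graph — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's algorithm: the names soft_threshold, distributed_prox_grad, mixing, and rounds are hypothetical, the l1 norm stands in for the nonsmooth convex regularizer, and the schedule rounds(t) = t + 1 is one example of communication rounds that grow over time.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1, a standard nonsmooth convex regularizer
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def distributed_prox_grad(grads, x0, mixing, alpha, lam, rounds, T=100):
        # grads[i]: gradient oracle of the (possibly nonconvex) local cost f_i
        # x0: (n, d) array of initial local estimates, one row per agent
        # mixing(t): (n, n) doubly stochastic matrix of the graph at time t
        # alpha: constant step-size; lam: l1 regularization weight
        # rounds(t): number of consensus rounds at iteration t (increasing in t)
        x = x0.copy()
        n = x.shape[0]
        for t in range(T):
            # Each agent takes a local proximal gradient step with a constant step-size
            y = np.stack([soft_threshold(x[i] - alpha * grads[i](x[i]), alpha * lam)
                          for i in range(n)])
            # Multiple consensus (averaging) rounds over the time-varying network
            for _ in range(rounds(t)):
                y = mixing(t) @ y
            x = y
        return x

    # Toy usage: smooth, bounded, nonconvex local costs
    # f_i(x) = sum_j (x_j - b_ij)^2 / (1 + (x_j - b_ij)^2), plus lam * ||x||_1
    n, d = 4, 3
    rng = np.random.default_rng(0)
    bs = [rng.standard_normal(d) for _ in range(n)]
    grads = [(lambda x, b=b: 2.0 * (x - b) / (1.0 + (x - b) ** 2) ** 2) for b in bs]
    W = np.full((n, n), 1.0 / n)  # complete-graph averaging as a stand-in mixing matrix
    x = distributed_prox_grad(grads, rng.standard_normal((n, d)),
                              mixing=lambda t: W, alpha=0.1, lam=0.05,
                              rounds=lambda t: t + 1)

The extra averaging rounds are what let a constant step-size work: they shrink the consensus error fast enough, at the price of communication that grows with the iteration count.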

Original language: English
Pages (from-to): 1005-1017
Number of pages: 13
Journal: IEEE Transactions on Control of Network Systems
Volume: 10
Issue number: 2
Publication status: Published - 1 Jun 2023

Keywords

  • Distributed proximal gradient algorithm
  • multiagent systems
  • nonconvex optimization
  • time-varying topology
