Distributed Stochastic Proximal Algorithm With Random Reshuffling for Nonsmooth Finite-Sum Optimization

Xia Jiang, Xianlin Zeng, Jian Sun*, Jie Chen, Lihua Xie

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Nonsmooth finite-sum minimization is a fundamental problem in machine learning. This article develops a distributed stochastic proximal-gradient algorithm with random reshuffling to solve finite-sum minimization over time-varying multiagent networks. The objective function is a sum of differentiable convex functions and a nonsmooth regularizer. Each agent in the network updates local variables through local information exchange, and the agents cooperate to seek an optimal solution. We prove that the local variable estimates generated by the proposed algorithm achieve consensus and are attracted to a neighborhood of the optimal solution at an O(1/T + 1/√T) convergence rate, where T is the total number of iterations. Finally, comparative simulations are provided to verify the convergence performance of the proposed algorithm.
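The paper's algorithm is distributed over time-varying graphs, which is beyond the scope of a short snippet, but the core update it builds on can be illustrated in a minimal single-agent sketch: each epoch randomly reshuffles the component gradients and applies one proximal-gradient step per component. This sketch assumes an ℓ1 regularizer (so the proximal operator is soft thresholding) and least-squares components; all function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_rr(grads, x0, step=0.01, lam=0.1, epochs=50, seed=0):
    # Single-agent sketch of a stochastic proximal-gradient method
    # with random reshuffling: every epoch visits each component
    # gradient exactly once, in a fresh random order.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = len(grads)
    for _ in range(epochs):
        for i in rng.permutation(n):  # random reshuffling (RR)
            x = soft_threshold(x - step * grads[i](x), step * lam)
    return x

# Toy problem: f_i(x) = 0.5 * (a_i @ x - b_i)^2, regularized by lam*||x||_1.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
grads = [lambda x, a=A[i], bi=b[i]: a * (a @ x - bi) for i in range(20)]
x_hat = prox_sgd_rr(grads, np.zeros(5))
```

Reshuffling (sampling without replacement each epoch) is what distinguishes this scheme from uniform-sampling SGD; the paper analyzes this strategy in the distributed, consensus-based setting.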

Original language: English
Pages (from-to): 4082-4096
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 3
DOIs
Publication status: Published - 1 Mar 2024

Keywords

  • Distributed optimization
  • proximal operator
  • random reshuffling (RR)
  • stochastic algorithm
  • time-varying graphs
