Distributed Adaptive Gradient Algorithm with Gradient Tracking for Stochastic Non-Convex Optimization

Dongyu Han, Kun Liu, Yeming Lin, Yuanqing Xia

Research output: Contribution to journal › Article › peer-review

Abstract

This paper considers a distributed stochastic non-convex optimization problem in which the nodes of a network cooperatively minimize a sum of $L$-smooth local cost functions with sparse gradients. By adaptively adjusting the stepsizes according to the historical (possibly sparse) gradients, a distributed adaptive gradient algorithm is proposed, in which a gradient tracking estimator handles the heterogeneity between the local cost functions. We establish an upper bound on the optimality gap, which shows that the proposed algorithm reaches a neighborhood of a first-order stationary solution whose size depends on the upper bound on the variance of the stochastic gradients. Finally, numerical examples illustrate the effectiveness of the algorithm.
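The combination described in the abstract can be illustrated with a minimal sketch: an AdaGrad-style coordinate-wise stepsize driven by accumulated historical gradients, plus a gradient tracking variable that estimates the network-average gradient. This is a generic illustration under simplifying assumptions (a ring network with a doubly stochastic mixing matrix `W`, quadratic local costs, and small additive gradient noise), not the paper's exact algorithm or analysis setting.

```python
import numpy as np

# Hedged sketch of distributed adaptive gradients with gradient tracking.
# All names (W, eta, the cost functions) are illustrative assumptions.

np.random.seed(0)
n, d, T = 4, 3, 500          # nodes, dimension, iterations
eta, eps = 0.1, 1e-8         # base stepsize, numerical floor

# Doubly stochastic mixing matrix for a ring: each node averages with neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

# Smooth local costs f_i(x) = 0.5 * ||x - b_i||^2 with distinct minimizers b_i;
# the global minimizer of (1/n) * sum_i f_i is the mean of the b_i.
B = np.random.randn(n, d)

def grad(i, x, noise=0.01):
    # Stochastic gradient of f_i at x (bounded-variance additive noise).
    return (x - B[i]) + noise * np.random.randn(d)

X = np.zeros((n, d))                       # local iterates, one row per node
G = np.array([grad(i, X[i]) for i in range(n)])
Y = G.copy()                               # gradient-tracking variables
V = np.zeros((n, d))                       # accumulated squared tracked gradients

for t in range(T):
    V += Y ** 2
    step = eta / (np.sqrt(V) + eps)        # AdaGrad-style adaptive stepsize
    X_new = W @ X - step * Y               # consensus step + adaptive descent
    G_new = np.array([grad(i, X_new[i]) for i in range(n)])
    Y = W @ Y + G_new - G                  # track the average gradient
    X, G = X_new, G_new

x_star = B.mean(axis=0)                    # minimizer of the global cost
print(np.max(np.abs(X - x_star)))          # worst-case node error, should be small
```

The gradient tracking update `Y = W @ Y + G_new - G` preserves the invariant that the average of the `Y_i` equals the average of the current local gradients, which is what lets adaptive stepsizes be driven by a global gradient estimate rather than heterogeneous local ones.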

Original language: English
Pages (from-to): 1-8
Number of pages: 8
Journal: IEEE Transactions on Automatic Control
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Convex functions
  • Cost function
  • Distributed non-convex optimization
  • Radio frequency
  • Robots
  • Sparse matrices
  • Upper bound
  • Vectors
  • adaptive gradient algorithm
  • gradient tracking
  • stochastic gradient

