Abstract
This paper considers a distributed stochastic non-convex optimization problem, where the nodes in a network cooperatively minimize a sum of $L$-smooth local cost functions with sparse gradients. By adaptively adjusting the stepsizes according to the historical (possibly sparse) gradients, a distributed adaptive gradient algorithm is proposed, in which a gradient tracking estimator is used to handle the heterogeneity among the local cost functions. We establish an upper bound on the optimality gap, which shows that the proposed algorithm reaches a first-order stationary solution up to an accuracy determined by the upper bound on the variance of the stochastic gradients. Finally, numerical examples are presented to illustrate the effectiveness of the algorithm.
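The record contains no pseudocode, so the following is only a minimal Python sketch of a generic adaptive gradient-tracking update of the kind the abstract describes, not the paper's algorithm. All names (`dagt_step`, `W`, `stoch_grad`) and the AdaGrad-style squared-gradient accumulator are illustrative assumptions.

```python
import numpy as np

def dagt_step(W, x, y, g, v, stoch_grad, eps=1e-8):
    """One synchronous iteration for n nodes with d-dimensional iterates.

    W          : (n, n) doubly stochastic mixing matrix of the network
    x          : (n, d) local decision variables
    y          : (n, d) gradient-tracking estimates of the average gradient
    g          : (n, d) stochastic gradients from the previous iteration
    v          : (n, d) accumulated squared (possibly sparse) gradients
    stoch_grad : callable(i, x_i) -> sampled gradient of node i's cost
    """
    # Adaptive per-coordinate stepsizes built from the gradient history;
    # coordinates whose sparse gradients stay zero keep large stepsizes.
    step = 1.0 / (np.sqrt(v) + eps)
    # Consensus averaging plus descent along the tracked direction y.
    x_next = W @ x - step * y
    # Fresh stochastic gradients at the new local iterates.
    g_next = np.stack([stoch_grad(i, x_next[i]) for i in range(x.shape[0])])
    # Accumulate squared gradients for the next adaptive stepsizes.
    v_next = v + g_next ** 2
    # Gradient-tracking recursion: each y_i tracks the network-average gradient,
    # which compensates for heterogeneity among the local cost functions.
    y_next = W @ y + g_next - g
    return x_next, y_next, g_next, v_next
```

A standard initialization for schemes of this form sets `y` and `g` to the first sampled gradients and `v` to their squares; the `eps` term keeps the adaptive stepsizes finite on coordinates that have not yet seen a nonzero gradient.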
| Original language | English |
|---|---|
| Pages (from-to) | 1-8 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Automatic Control |
| Publication status | Accepted/In press - 2024 |
Keywords
- Convex functions
- Cost function
- Distributed non-convex optimization
- Radio frequency
- Robots
- Sparse matrices
- Upper bound
- Vectors
- adaptive gradient algorithm
- gradient tracking
- stochastic gradient