Abstract
This article proposes a distributed stochastic projection-free algorithm for large-scale constrained finite-sum optimization whose constraint set is complicated, so that projection onto it can be expensive. The global cost function is allocated to multiple agents, each of which computes its local stochastic gradients and communicates with its neighbors to solve the global problem. Stochastic gradient methods offer low per-iteration computational cost, but the variance caused by random sampling makes convergence slow and hard to guarantee. To construct a convergent distributed stochastic projection-free algorithm, this article incorporates variance reduction and gradient tracking techniques into the Frank–Wolfe (FW) update, and develops a novel sampling rule for the variance reduction step to suppress the variance introduced by stochastic gradients. Rigorous proofs show that the proposed distributed projection-free algorithm converges at a sublinear rate and enjoys superior complexity guarantees for both convex and nonconvex objective functions. Comparative simulations demonstrate the convergence and computational efficiency of the proposed algorithm.
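To illustrate why FW-type methods avoid projections, the following is a minimal, centralized sketch of a stochastic Frank–Wolfe step on the probability simplex; it is not the authors' distributed algorithm (no gradient tracking, consensus, or the proposed sampling rule), and the least-squares objective, minibatch size, and step-size schedule are illustrative assumptions.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the probability simplex:
    argmin_{s in simplex} <grad, s> is the vertex e_i with i = argmin_i grad_i,
    so the 'projection' step reduces to an argmin (no Euclidean projection)."""
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def stochastic_fw(A, b, iters=200, batch=8, seed=0):
    """Minimize f(x) = (1/n) * sum_i (a_i^T x - b_i)^2 over the simplex
    with stochastic Frank-Wolfe using plain minibatch gradients."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.full(d, 1.0 / d)                        # start at the simplex center
    for t in range(iters):
        idx = rng.integers(0, n, size=batch)       # sample a minibatch of component functions
        Ab, bb = A[idx], b[idx]
        grad = 2.0 * Ab.T @ (Ab @ x - bb) / batch  # stochastic gradient estimate
        s = lmo_simplex(grad)                      # projection-free step direction
        gamma = 2.0 / (t + 2)                      # standard FW step-size schedule
        x = (1 - gamma) * x + gamma * s            # convex combination keeps x feasible
    return x
```

Because each iterate is a convex combination of feasible points, the iterates stay in the constraint set without ever solving a projection subproblem; the paper's contribution is making this scheme work distributedly with controlled gradient variance.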
Original language | English |
---|---|
Pages (from-to) | 2479-2494 |
Number of pages | 16 |
Journal | IEEE Transactions on Automatic Control |
Volume | 70 |
Issue number | 4 |
DOIs | |
Publication status | Published - Apr 2025 |
Keywords
- Distributed solver
- finite-sum optimization
- Frank–Wolfe (FW) algorithm
- stochastic gradient
- variance reduction