Abstract
This paper considers a class of consensus optimization problems over a time-varying communication network in which each agent can interact only with its neighbours. The goal is to minimize the sum of all local, possibly non-smooth objectives in the presence of a different constraint set per agent. To this end, we propose a novel distributed heavy-ball algorithm that combines a subgradient tracking technique with a momentum term built from historical iterates. The algorithm facilitates the distributed application of existing centralized accelerated momentum methods, especially for constrained non-smooth problems. Under certain assumptions and conditions on the step-size and momentum coefficient, the convergence and optimality of the proposed algorithm are guaranteed through a rigorous theoretical analysis, and a convergence rate of $\mathcal{O}(\ln k/\sqrt{k})$ in objective value is also established. Simulations on an $\ell_1$-regularized logistic-regression problem show that the proposed algorithm achieves faster convergence than existing related distributed algorithms, while a case study on a building energy management problem further demonstrates its efficacy.
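To illustrate the general idea, the following is a minimal sketch (not the paper's exact scheme) of a distributed projected-subgradient method with a heavy-ball momentum term: each agent mixes its estimate with those of its neighbours through a doubly stochastic weight matrix, takes a subgradient step on its own objective with a diminishing step-size, adds a momentum term proportional to its previous displacement, and projects onto its local constraint set. All function names, step-size choices, and the small example below are illustrative assumptions, not reproductions of the paper's algorithm.

```python
import numpy as np

def distributed_heavy_ball(subgrads, projections, W, x0,
                           alpha=0.05, beta=0.3, iters=200):
    """Illustrative sketch of a distributed heavy-ball subgradient method.

    subgrads[i](x)    -- returns a subgradient of agent i's objective at x
    projections[i](x) -- projects x onto agent i's constraint set
    W                 -- n-by-n doubly stochastic mixing matrix of the network
    """
    n = len(subgrads)
    x = [x0.copy() for _ in range(n)]
    x_prev = [x0.copy() for _ in range(n)]
    for k in range(iters):
        alpha_k = alpha / np.sqrt(k + 1)  # diminishing step-size
        x_new = []
        for i in range(n):
            # consensus (mixing) step with neighbours
            mix = sum(W[i, j] * x[j] for j in range(n))
            # subgradient step plus heavy-ball momentum, then projection
            step = (mix - alpha_k * subgrads[i](x[i])
                    + beta * (x[i] - x_prev[i]))
            x_new.append(projections[i](step))
        x_prev, x = x, x_new
    return x

# Toy example (hypothetical): 3 agents minimizing sum_i |x - c_i| over [-1, 1]
if __name__ == "__main__":
    c = [0.2, -0.5, 0.9]
    subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
    projections = [lambda x: np.clip(x, -1.0, 1.0)] * 3
    W = np.full((3, 3), 1.0 / 3.0)  # complete graph, uniform weights
    x_final = distributed_heavy_ball(subgrads, projections, W, np.array([0.0]))
    print([float(xi) for xi in x_final])  # all estimates near the median 0.2
```

This sketch uses a static complete graph and a fixed momentum coefficient for simplicity, whereas the paper treats time-varying networks and specific conditions on the step-size and momentum coefficient.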
| Original language | English |
|---|---|
| Pages (from-to) | 1-16 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Automatic Control |
| DOIs | |
| Publication status | Accepted/In press - 2024 |
Keywords
- Convergence
- Distributed algorithms
- Distributed optimization
- Heuristic algorithms
- Linear programming
- Optimization
- Reviews
- Vectors
- heavy-ball momentum
- multi-agent networks
- sub-gradient averaging consensus