Abstract
This technical note studies a class of distributed nonsmooth convex consensus optimization problems. The cost function is a sum of local cost functions that are convex but nonsmooth. Each local cost function consists of a twice differentiable (smooth) convex function and two lower semicontinuous (nonsmooth) convex functions. We call these problems single-smooth plus double-nonsmooth (SSDN) problems. Under mild conditions, we propose a distributed double proximal primal-dual optimization algorithm. The double proximal splitting is designed to handle the difficulty that the sum of the two nonsmooth functions may not be proximable; it also guarantees that the proposed algorithm is locally Lipschitz continuous. An auxiliary variable in the double proximal splitting is introduced to estimate the subgradient of the second nonsmooth function. Theoretically, we conduct the convergence analysis by employing Lyapunov stability theory, which shows that the proposed algorithm drives the states to consensus at the optimal point. Finally, nontrivial simulations are presented, and the results demonstrate the effectiveness of the proposed algorithm.
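The proximal operator at the heart of the abstract can be illustrated with a standard textbook example (not taken from the paper): the proximity operator of the scaled ℓ1 norm has the closed-form soft-thresholding solution, whereas the sum of two nonsmooth functions generally admits no such closed form, which is the difficulty the double proximal splitting addresses. A minimal sketch, assuming the usual definition prox_{λf}(v) = argmin_x f(x) + (1/2λ)‖x − v‖²:

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1, applied elementwise.

    prox(v) = argmin_x lam*||x||_1 + 0.5*||x - v||^2,
    whose closed-form solution is soft-thresholding:
    shrink each component of v toward zero by lam.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Components with magnitude below lam are set exactly to zero;
# the rest are shrunk by lam while keeping their sign.
v = np.array([3.0, -0.5, 1.2])
print(prox_l1(v, 1.0))  # -> [2.  0.  0.2]
```

For a single ℓ1 term this map is cheap and exact; for a sum such as ‖x‖₁ + g(x) with g nonsmooth, no comparable closed form exists in general, so the algorithm in the paper splits the two nonsmooth terms and applies a proximal step to each separately.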
Original language | English |
---|---|
Article number | 8807129 |
Pages (from-to) | 1800-1806 |
Number of pages | 7 |
Journal | IEEE Transactions on Automatic Control |
Volume | 65 |
Issue number | 4 |
DOIs | |
Publication status | Published - Apr 2020 |
Keywords
- Distributed optimization
- nonsmooth convex optimization
- primal dual
- proximal operator