TY - JOUR
T1 - Distributed computation of linear matrix equations
T2 - An optimization perspective
AU - Zeng, Xianlin
AU - Liang, Shu
AU - Hong, Yiguang
AU - Chen, Jie
N1 - Publisher Copyright:
© 1963-2012 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - This paper investigates the distributed computation of the well-known linear matrix equation in the form of AXB = F, with the matrices A, B, X, and F of appropriate dimensions, over multiagent networks from an optimization perspective. We consider standard distributed matrix-information structures, where each agent of the multiagent network has access to one of the subblock matrices of A, B, and F. To be specific, we first propose different decomposition methods to reformulate the matrix equations in standard structures as distributed constrained optimization problems by introducing substitutional variables; we show that the solutions of the reformulated distributed optimization problems are equivalent to least squares solutions to the original matrix equations; and we design distributed continuous-time algorithms for the constrained optimization problems by using augmented matrices and a derivative feedback technique. Moreover, we prove the exponential convergence of the algorithms to a least squares solution to the matrix equation for any initial condition.
AB - This paper investigates the distributed computation of the well-known linear matrix equation in the form of AXB = F, with the matrices A, B, X, and F of appropriate dimensions, over multiagent networks from an optimization perspective. We consider standard distributed matrix-information structures, where each agent of the multiagent network has access to one of the subblock matrices of A, B, and F. To be specific, we first propose different decomposition methods to reformulate the matrix equations in standard structures as distributed constrained optimization problems by introducing substitutional variables; we show that the solutions of the reformulated distributed optimization problems are equivalent to least squares solutions to the original matrix equations; and we design distributed continuous-time algorithms for the constrained optimization problems by using augmented matrices and a derivative feedback technique. Moreover, we prove the exponential convergence of the algorithms to a least squares solution to the matrix equation for any initial condition.
KW - Constrained convex optimization
KW - distributed computation
KW - least squares solution
KW - linear matrix equation
KW - substitutional decomposition
UR - http://www.scopus.com/inward/record.url?scp=85048591432&partnerID=8YFLogxK
U2 - 10.1109/TAC.2018.2847603
DO - 10.1109/TAC.2018.2847603
M3 - Article
AN - SCOPUS:85048591432
SN - 0018-9286
VL - 64
SP - 1858
EP - 1873
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 5
M1 - 8385114
ER -