TY - JOUR
T1 - Tensor train decomposition for solving large-scale linear equations
AU - Chen, Hengnu
AU - Deng, Lei
AU - Qu, Zheng
AU - Liang, Ling
AU - Yan, Tianyi
AU - Xie, Yuan
AU - Li, Guoqi
N1 - Publisher Copyright:
© 2021
PY - 2021/11/13
Y1 - 2021/11/13
N2 - Solving large-scale linear equations is an important problem in signal processing, machine vision, financial mathematics, quantum physics, chemistry, and many other areas. Due to the curse of dimensionality, classic numerical treatments of such problems are difficult and inefficient, so they are usually addressed by low-rank approximation methods. Tensor train decomposition (TTD) is one such method that can, in theory, reduce the execution cost. In this work, we present a TTD-based batch alternating least squares (BALS) method, termed TTD-BALS, for solving large-scale linear equations. TTD-BALS makes the computation efficient and stable, since it converts the large-scale optimization problem into a sequence of smaller-scale optimization subproblems. We further show that all existing TTD-based methods, including our TTD-BALS, are not universal and can become trapped in local minima, and they are therefore not suitable for general, arbitrary linear equations. Fortunately, our analysis identifies specific requirements under which TTD-BALS provides a very good solution, and these requirements are satisfied in a wide range of domains such as signal processing and scientific computing. Furthermore, we prove that the proposed method reduces the computational complexity, and we conduct numerical experiments to confirm its effectiveness.
AB - Solving large-scale linear equations is an important problem in signal processing, machine vision, financial mathematics, quantum physics, chemistry, and many other areas. Due to the curse of dimensionality, classic numerical treatments of such problems are difficult and inefficient, so they are usually addressed by low-rank approximation methods. Tensor train decomposition (TTD) is one such method that can, in theory, reduce the execution cost. In this work, we present a TTD-based batch alternating least squares (BALS) method, termed TTD-BALS, for solving large-scale linear equations. TTD-BALS makes the computation efficient and stable, since it converts the large-scale optimization problem into a sequence of smaller-scale optimization subproblems. We further show that all existing TTD-based methods, including our TTD-BALS, are not universal and can become trapped in local minima, and they are therefore not suitable for general, arbitrary linear equations. Fortunately, our analysis identifies specific requirements under which TTD-BALS provides a very good solution, and these requirements are satisfied in a wide range of domains such as signal processing and scientific computing. Furthermore, we prove that the proposed method reduces the computational complexity, and we conduct numerical experiments to confirm its effectiveness.
KW - Computational complexity
KW - Large-scale linear equations
KW - Tensor contraction
KW - Tensor network diagram
KW - Tensor train decomposition
UR - http://www.scopus.com/inward/record.url?scp=85114125424&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2021.08.034
DO - 10.1016/j.neucom.2021.08.034
M3 - Article
AN - SCOPUS:85114125424
SN - 0925-2312
VL - 464
SP - 203
EP - 217
JO - Neurocomputing
JF - Neurocomputing
ER -