Tensor train decomposition for solving large-scale linear equations

Hengnu Chen, Lei Deng, Zheng Qu, Ling Liang, Tianyi Yan, Yuan Xie, Guoqi Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Solving large-scale linear equations is an important problem in signal processing, machine vision, financial mathematics, quantum physics, chemistry, and many other areas. Due to the curse of dimensionality, classical numerical treatments of such problems are difficult and inefficient; they are therefore usually addressed by low-rank approximation methods. Tensor train decomposition (TTD) is one such method that can reduce the execution cost theoretically. In this work, we present a TTD-based batch alternating least squares (BALS) method, termed TTD-BALS, for solving large-scale linear equations. TTD-BALS makes the computation efficient and stable, since it converts the large-scale optimization problem into a sequence of smaller optimization subproblems. We further show that all existing TTD-based methods, including our TTD-BALS, are not universal: they can become trapped in local minima and are therefore not suitable for general, arbitrary linear equations. Fortunately, our analysis identifies specific requirements under which TTD-BALS provides a very good solution, and these requirements are satisfied in a wide range of domains such as signal processing and scientific computing. Furthermore, we prove that the proposed method reduces the computational complexity, and we conduct numerical experiments to confirm its effectiveness.
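The abstract rests on the idea of representing a high-dimensional array as a chain of small three-way cores (a tensor train) so that subsequent optimization works core by core. As illustration only, the following is a minimal NumPy sketch of the standard TT-SVD factorization, not the paper's TTD-BALS algorithm; the function name tt_svd, the rank cap max_rank, and the example tensor shapes are assumptions made for this sketch.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a full tensor into tensor-train (TT) cores via sequential truncated SVDs.

    Each core has shape (r_{k-1}, n_k, r_k); the boundary ranks r_0 = r_d = 1.
    """
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank_prev = 1
    # Unfold the tensor so the first mode is separated from the remaining modes.
    mat = tensor.reshape(rank_prev * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = min(max_rank, len(S))            # truncate to the rank cap
        cores.append(U[:, :rank].reshape(rank_prev, dims[k], rank))
        # Carry the remaining factor forward and expose the next mode.
        mat = (np.diag(S[:rank]) @ Vt[:rank]).reshape(rank * dims[k + 1], -1)
        rank_prev = rank
    cores.append(mat.reshape(rank_prev, dims[-1], 1))  # last core absorbs the residue
    return cores

# Example (assumed shapes): compress a random 4-way tensor and check the error.
rng = np.random.default_rng(0)
T = rng.standard_normal((6, 7, 8, 9))
cores = tt_svd(T, max_rank=5)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))  # contract shared TT ranks
approx = approx.reshape(T.shape)                           # drop boundary ranks of size 1
print(np.linalg.norm(T - approx) / np.linalg.norm(T))
```

In an alternating least squares scheme of the kind the abstract describes, the cores of such a representation would then be updated one (or one batch) at a time, each update being a small linear least-squares subproblem, while the remaining cores are held fixed.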

Original language: English
Pages (from-to): 203-217
Number of pages: 15
Journal: Neurocomputing
Volume: 464
DOIs
Publication status: Published - 13 Nov 2021

Keywords

  • Computational complexity
  • Large-scale linear equations
  • Tensor contraction
  • Tensor network diagram
  • Tensor train decomposition
