2-D learned proximal gradient algorithm for fast sparse matrix recovery

Chengzhu Yang, Yuantao Gu*, Badong Chen, Hongbing Ma, Hing Cheung So

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Many real-world problems can be modeled as sparse matrix recovery from two-dimensional (2D) measurements, a topic of central importance in the signal processing community. Building on the success of compressed sensing, many classical iterative algorithms can be directly applied or adapted to matrix recovery, though they are computationally expensive. To alleviate this, we propose a neural network named the 2D learned proximal gradient algorithm (2D-LPGA), which aims to quickly reconstruct the target matrix. Theoretical analysis reveals that if the parameters of the network satisfy certain conditions, it can reconstruct the sparse signal at a linear convergence rate. Moreover, numerical experiments demonstrate the superiority of the proposed method over other classical schemes.
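The paper itself is not reproduced here, but the class of iterations it unfolds can be illustrated. The following is a minimal sketch, assuming a hypothetical 2D measurement model Y = A X Bᵀ with an elementwise-sparse X; it runs a plain (untrained) proximal gradient iteration with soft-thresholding, which is the kind of update a learned-unfolding network such as 2D-LPGA would parameterize layer by layer. The function names, the model, and the fixed step/threshold choices are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def soft_threshold(X, tau):
    # Proximal operator of the l1 norm: elementwise soft-thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_grad_2d(Y, A, B, n_iters=100, step=None, tau=0.01):
    """Recover a sparse X from Y = A @ X @ B.T (hypothetical 2D model)
    by proximal gradient descent on 0.5*||Y - A X B^T||_F^2 + tau*||X||_1.
    An unfolded network would replace each iteration with a layer whose
    step size / threshold (and possibly A, B) are learned."""
    if step is None:
        # 1 / Lipschitz constant of the gradient of the data-fidelity term.
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iters):
        grad = A.T @ (A @ X @ B.T - Y) @ B   # gradient of the fidelity term
        X = soft_threshold(X - step * grad, step * tau)
    return X
```

With well-conditioned measurement operators this classical iteration already converges linearly; the point of unfolding is that a fixed, small number of learned layers can reach comparable accuracy far faster than running many hand-tuned iterations.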

Original language: English
Article number: 9200730
Pages (from-to): 1492-1496
Number of pages: 5
Journal: IEEE Transactions on Circuits and Systems II: Express Briefs
Volume: 68
Issue number: 4
DOIs
Publication status: Published - Apr 2021
Externally published: Yes

Keywords

  • Neural network
  • Proximal gradient
  • Sparse matrix recovery
  • Unfolding
