Learning Proximal Operator Methods for Nonconvex Sparse Recovery with Theoretical Guarantee

Chengzhu Yang, Yuantao Gu*, Badong Chen, Hongbing Ma, Hing Cheung So

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

Sparse recovery has attracted considerable attention in the signal processing community in recent years because of its widespread use in many applications. Although numerous convex and nonconvex methods have been proposed for this problem, most of them are computationally expensive. Recently, the learned iterative shrinkage-thresholding algorithm (LISTA) and its variants have been applied to sparse recovery; these are deep neural networks designed by unfolding ISTA. However, the problems they can solve are regularized by the $\ell_1$ norm, which is less effective at promoting sparsity than some nonconvex sparsity-inducing penalties (SIPs). Motivated by the fact that problems regularized by these SIPs can be solved by proximal operator methods, we devise a network named the learning proximal operator method (LePOM) for sparse recovery. For LePOM on a general class of SIPs, a necessary condition for its convergence is established. Based on this condition, a simplified network with fewer parameters is proposed. Theoretical analysis of this network leads us to further reduce the number of parameters, arriving at the Analytical LePOM (ALePOM). ALePOM determines most parameters by solving an optimization problem and significantly reduces the number of parameters. Theoretical analysis shows that if the signal is sufficiently sparse, ALePOM converges linearly. Simulations confirm our analyses and demonstrate that the proposed solutions outperform state-of-the-art sparse recovery algorithms and neural-network-based methods.
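For context, the iteration that LePOM-style networks unfold is the classical proximal gradient step $x^{k+1} = \mathrm{prox}_{t\lambda P}\big(x^k - t A^\top(Ax^k - b)\big)$ for $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda P(x)$. The sketch below is not the paper's LePOM/ALePOM network; it is a minimal illustration of the underlying proximal operator method, using the soft-threshold (the $\ell_1$ prox behind ISTA/LISTA) and the hard-threshold (the prox of the nonconvex $\ell_0$ penalty, one example SIP). All function names, the step-size choice, and the problem sizes are illustrative assumptions.

```python
# Minimal sketch of a proximal gradient (proximal operator) method --
# the iteration that LISTA/LePOM-style networks unfold. Not the paper's
# implementation; prox choices and parameters here are illustrative.
import numpy as np

def soft_threshold(z, tau):
    """Prox of the convex l1 penalty (the ISTA/LISTA shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def hard_threshold(z, tau):
    """Prox of the nonconvex l0 penalty: zero entries below sqrt(2*tau)."""
    out = z.copy()
    out[np.abs(z) < np.sqrt(2.0 * tau)] = 0.0
    return out

def proximal_gradient(A, b, lam, prox, n_iter=200):
    """Solve 0.5*||Ax - b||^2 + lam*P(x) by iterating prox-gradient steps."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox(x - t * A.T @ (A @ x - b), t * lam)
    return x

# Usage: recover a k-sparse vector from m < n random measurements.
rng = np.random.default_rng(0)
m, n, k = 50, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_l1 = proximal_gradient(A, b, lam=0.01, prox=soft_threshold)  # convex l1
x_l0 = proximal_gradient(A, b, lam=0.01, prox=hard_threshold)  # nonconvex SIP
print("l1 recovery error:", np.linalg.norm(x_l1 - x_true))
print("l0 recovery error:", np.linalg.norm(x_l0 - x_true))
```

A learned variant replaces the fixed step size and threshold in each iteration with trainable per-layer parameters; ALePOM, per the abstract, instead determines most of these parameters analytically by solving an optimization problem.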

Original language: English
Article number: 9025003
Pages (from-to): 5244-5259
Number of pages: 16
Journal: IEEE Transactions on Signal Processing
Volume: 68
DOIs
Publication status: Published - 2020
Externally published: Yes

Keywords

  • neural network
  • nonconvex sparsity-inducing penalty
  • proximal operator method
  • sparse recovery
  • unfolding iterative algorithm
