TY - JOUR
T1 - Learning Proximal Operator Methods for Nonconvex Sparse Recovery with Theoretical Guarantee
AU - Yang, Chengzhu
AU - Gu, Yuantao
AU - Chen, Badong
AU - Ma, Hongbing
AU - So, Hing Cheung
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020
Y1 - 2020
AB - Sparse recovery has attracted considerable attention in the signal processing community in recent years owing to its widespread use in many applications. Although many convex and nonconvex methods have been proposed for this problem, most of them are computationally expensive. Recently, the learned iterative shrinkage-thresholding algorithm (LISTA) and its variants have been applied to sparse recovery; they are based on specific deep neural networks designed by unfolding ISTA. However, the problems they can solve are regularized by the \ell_1 norm, which is less effective at promoting sparsity than some nonconvex sparsity-inducing penalties (SIPs). Motivated by the fact that problems regularized by these SIPs can be solved by proximal operator methods, we devise a network named the learning proximal operator method (LePOM) for sparse recovery. For LePOM on a general class of SIPs, a necessary condition for its convergence is established. Based on this condition, a simplified network with fewer parameters is proposed. Theoretical analysis of this network inspires us to further reduce the number of parameters, leading us to propose the Analytical LePOM (ALePOM). ALePOM determines most parameters by solving an optimization problem and thereby significantly reduces the number of parameters. Theoretical analysis shows that if the signal is sufficiently sparse, ALePOM converges linearly. Simulations confirm our analyses and demonstrate that the proposed solutions outperform state-of-the-art sparse recovery algorithms and neural network-based methods.
KW - neural network
KW - nonconvex sparsity-inducing penalty
KW - proximal operator method
KW - sparse recovery
KW - unfolding iterative algorithm
UR - http://www.scopus.com/inward/record.url?scp=85092164244&partnerID=8YFLogxK
U2 - 10.1109/TSP.2020.2978615
DO - 10.1109/TSP.2020.2978615
M3 - Article
AN - SCOPUS:85092164244
SN - 1053-587X
VL - 68
SP - 5244
EP - 5259
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
M1 - 9025003
ER -