TY - JOUR
T1 - Simpler Gradient Methods for Blind Super-Resolution With Lower Iteration Complexity
AU - Li, Jinsheng
AU - Cui, Wei
AU - Zhang, Xu
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - We study the problem of blind super-resolution, which can be formulated as a low-rank matrix recovery problem via vectorized Hankel lift (VHL). The previous gradient descent method based on VHL, named PGD-VHL, relies on additional regularization, such as a projection and a balancing penalty, and exhibits suboptimal iteration complexity. In this paper, we propose a simpler unconstrained optimization problem without these two types of regularization and develop two new and provable gradient methods, named VGD-VHL and ScalGD-VHL. A novel and sharp analysis is provided for the theoretical guarantees of our algorithms, demonstrating that our methods offer lower iteration complexity than PGD-VHL. In addition, ScalGD-VHL achieves the lowest iteration complexity, which is independent of the condition number. Furthermore, our novel analysis reveals that the blind super-resolution problem is less incoherence-demanding, thereby eliminating the need for incoherent projections to achieve linear convergence. Empirical results illustrate that our methods exhibit superior computational efficiency while achieving recovery performance comparable to prior art.
AB - We study the problem of blind super-resolution, which can be formulated as a low-rank matrix recovery problem via vectorized Hankel lift (VHL). The previous gradient descent method based on VHL, named PGD-VHL, relies on additional regularization, such as a projection and a balancing penalty, and exhibits suboptimal iteration complexity. In this paper, we propose a simpler unconstrained optimization problem without these two types of regularization and develop two new and provable gradient methods, named VGD-VHL and ScalGD-VHL. A novel and sharp analysis is provided for the theoretical guarantees of our algorithms, demonstrating that our methods offer lower iteration complexity than PGD-VHL. In addition, ScalGD-VHL achieves the lowest iteration complexity, which is independent of the condition number. Furthermore, our novel analysis reveals that the blind super-resolution problem is less incoherence-demanding, thereby eliminating the need for incoherent projections to achieve linear convergence. Empirical results illustrate that our methods exhibit superior computational efficiency while achieving recovery performance comparable to prior art.
KW - Blind super-resolution
KW - low-rank matrix factorization
KW - scaled gradient descent
KW - vanilla gradient descent
UR - http://www.scopus.com/inward/record.url?scp=85205782857&partnerID=8YFLogxK
U2 - 10.1109/TSP.2024.3470071
DO - 10.1109/TSP.2024.3470071
M3 - Article
AN - SCOPUS:85205782857
SN - 1053-587X
VL - 72
SP - 5123
EP - 5139
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -