Inexact proximal gradient algorithm with random reshuffling for nonsmooth optimization

Xia Jiang, Yanyan Fang, Xianlin Zeng*, Jian Sun, Jie Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Proximal gradient algorithms are widely used for convex optimization with nonsmooth regularization. In practice, gradients are computed with errors and the exact solution of the proximal operator for nonsmooth regularization is often hard to obtain; consequently, the design and application of inexact proximal gradient algorithms have attracted considerable attention from researchers. This paper proposes computationally efficient basic and inexact proximal gradient descent algorithms with random reshuffling. The proposed stochastic algorithms process randomly reshuffled data through successive gradient descent steps and apply only one proximal operator after each full pass over the data. We prove convergence of the proposed proximal gradient algorithms under the sampling-without-replacement reshuffling scheme. When computational errors exist in the gradients and proximal operations, the proposed inexact proximal gradient algorithms converge to a neighborhood of an optimal solution. Finally, we apply the proposed algorithms to compressed sensing and compare their efficiency with several popular algorithms.
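The epoch structure described in the abstract — successive per-sample gradient steps over a random permutation of the data, followed by a single proximal step — can be sketched on a lasso-type compressed-sensing problem. This is an illustrative reconstruction, not the paper's algorithm or analysis: the step size `eta`, the prox scaling `eta * n * lam`, and all function names are assumptions made for the example.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1, i.e. soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_rr(A, b, lam, eta=0.01, epochs=100, seed=0):
    """Proximal gradient descent with random reshuffling for
    min_x (1/2n) * sum_i (a_i^T x - b_i)^2 + lam * ||x||_1.

    Illustrative sketch: per-epoch prox scaling eta * n * lam is an
    assumption, chosen so one prox step matches n accumulated
    gradient steps of size eta.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        # Sampling without replacement: one random permutation per epoch.
        for i in rng.permutation(n):
            # Gradient step on the i-th smooth component only.
            x -= eta * (A[i] @ x - b[i]) * A[i]
        # A single proximal step after all data have passed through.
        x = prox_l1(x, eta * n * lam)
    return x
```

An inexact variant would replace the gradient `(A[i] @ x - b[i]) * A[i]` and the `prox_l1` evaluation with perturbed versions; per the abstract, the iterates then converge to a neighborhood of an optimal solution rather than to the solution itself.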

Original language: English
Article number: 112201
Journal: Science China Information Sciences
Volume: 68
Issue number: 1
DOIs
Publication status: Published - Jan 2025

Keywords

  • compressed sensing
  • inexact computation
  • nonsmooth optimization
  • proximal operator
  • random reshuffling
