On the acceleration of optimal regularization algorithms for linear ill-posed inverse problems

Ye Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Accelerated regularization algorithms for ill-posed problems have received much attention from researchers in inverse problems since the 1980s. The current optimal theoretical results indicate that some regularization algorithms, e.g. the ν-method and the Nesterov method, attain the optimal convergence rates under conventional source conditions with approximately the square root of the number of iterations needed for the benchmark (i.e. the Landweber iteration). In this paper, we propose a new class of regularization algorithms with parameter n, called the Acceleration Regularization of order n (ARn). Theoretically, we prove that, for an arbitrary number n > -1, ARn can attain the optimal convergence rates with approximately the (n+1)-th root of the number of iterations needed for the benchmark method. Moreover, unlike the existing accelerated regularization algorithms, the ARn algorithms have no saturation restriction. Some symplectic iterative regularizing algorithms are developed for the numerical realization of ARn. Finally, numerical experiments with integral equations and inverse problems in partial differential equations demonstrate that, at least for n ≤ 2, the numerical behavior of ARn matches our theoretical findings and surpasses the practical acceleration capability of all existing regularization algorithms.
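The ARn schemes themselves are not specified in this abstract, so they are not reproduced here. The sketch below is only a minimal illustration, under assumed choices (a Hilbert-matrix toy operator, noise level, step size, and a discrepancy-principle stopping rule), of the benchmark Landweber iteration and a Nesterov-type accelerated variant cited as prior work, so the iteration-count comparison the abstract refers to can be seen on a small example.

```python
# Hedged sketch (not the paper's ARn method): compare the benchmark Landweber
# iteration with a Nesterov-accelerated variant on a toy linear ill-posed
# problem A x = y. The operator, noise level, step size, and stopping rule
# are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n = 50
# Hilbert matrix: a classical severely ill-conditioned test operator.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0.0, np.pi, n))
y_exact = A @ x_true

delta = 1e-2 * np.linalg.norm(y_exact)            # assumed noise level
y = y_exact + delta * rng.standard_normal(n) / np.sqrt(n)

omega = 1.0 / np.linalg.norm(A, 2) ** 2           # step size, omega < 2 / ||A||^2
tau = 1.5                                         # discrepancy-principle parameter (assumed)
tol = tau * delta

def landweber(A, y, omega, tol, max_iter=500_000):
    """Benchmark method: x_{k+1} = x_k + omega * A^T (y - A x_k)."""
    x = np.zeros(A.shape[1])
    for k in range(1, max_iter + 1):
        r = y - A @ x
        if np.linalg.norm(r) <= tol:              # discrepancy principle
            return x, k
        x = x + omega * A.T @ r
    return x, max_iter

def nesterov_landweber(A, y, omega, tol, max_iter=500_000):
    """Nesterov-type acceleration of Landweber (one of the existing accelerated schemes)."""
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for k in range(1, max_iter + 1):
        if np.linalg.norm(y - A @ x) <= tol:      # same stopping rule as above
            return x, k
        z = x + (k - 1) / (k + 2) * (x - x_prev)  # momentum extrapolation
        x_prev, x = x, z + omega * A.T @ (y - A @ z)
    return x, max_iter

x_lw, k_lw = landweber(A, y, omega, tol)
x_ne, k_ne = nesterov_landweber(A, y, omega, tol)
print(f"Landweber : {k_lw:7d} iterations, error {np.linalg.norm(x_lw - x_true):.3e}")
print(f"Nesterov  : {k_ne:7d} iterations, error {np.linalg.norm(x_ne - x_true):.3e}")
```

On such a toy problem the accelerated variant typically satisfies the stopping rule after far fewer iterations than Landweber at a comparable reconstruction error; this is the kind of gap that, per the abstract, ARn is designed to widen further for n > 1.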

Original language: English
Article number: 6
Journal: Calcolo
Volume: 60
Issue number: 1
DOIs
Publication status: Published - Mar 2023

Keywords

  • Acceleration
  • Convergence rate
  • Inverse problems
  • Regularization
