Stochastic linear regularization methods: random discrepancy principle and applications

Ye Zhang, Chuchu Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

The a posteriori stopping rule plays a significant role in the design of efficient stochastic algorithms for various tasks in computational mathematics, such as inverse problems, optimization, and machine learning. Through the lens of classical regularization theory, this paper presents a novel analysis of Morozov's discrepancy principle for the stochastic generalized Landweber iteration and its continuous analog, the generalized stochastic asymptotical regularization. Unlike existing results, which concern convergence in probability, we prove strong convergence of the regularization error using tools from stochastic analysis, namely the theory of martingales. Numerical experiments verify the convergence of the discrepancy principle and demonstrate two new capabilities of the stochastic generalized Landweber iteration, which should carry over to other stochastic/statistical approaches: improved accuracy by selecting the optimal path, and identification of multiple solutions by clustering samples of the obtained approximate solutions.
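To make the abstract's ingredients concrete, here is a minimal Python sketch of a Landweber-type iteration with small additive Gaussian perturbations, stopped by a random discrepancy principle: iterate until ||A x_k − y^δ|| ≤ τδ with τ > 1. This is an illustrative simplification, not the authors' exact stochastic generalized Landweber scheme; the function name stochastic_landweber and the parameter choices tau, mu, and sigma are assumptions made for the example.

```python
import numpy as np

def stochastic_landweber(A, y_delta, delta, tau=1.2, mu=None,
                         sigma=1e-4, max_iter=10_000, rng=None):
    """Landweber iteration with additive Gaussian perturbations,
    stopped by a discrepancy-principle rule (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    if mu is None:
        # A step size at most 1 / ||A||_2^2 keeps the deterministic part stable.
        mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for k in range(max_iter):
        residual = A @ x - y_delta
        # Random discrepancy principle: stop once the (random) residual
        # norm drops below tau * delta, with tau > 1.
        if np.linalg.norm(residual) <= tau * delta:
            return x, k
        # Gradient step on 0.5 * ||A x - y_delta||^2 plus a stochastic kick.
        x = x - mu * (A.T @ residual) + sigma * rng.standard_normal(n)
    return x, max_iter

# Toy usage: run several independent paths and keep the one with the
# smallest residual, mimicking the "optimal path" selection above.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
noise = rng.standard_normal(50)
delta = 1e-2
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

paths = [stochastic_landweber(A, y_delta, delta, rng=rng) for _ in range(20)]
x_best, k_best = min(paths, key=lambda p: np.linalg.norm(A @ p[0] - y_delta))
print(f"stopped at iteration {k_best}, "
      f"error {np.linalg.norm(x_best - x_true):.3e}")
```

Clustering the endpoints of many such independent paths (e.g. with k-means) is one plausible way to realize the multi-solution identification mentioned in the abstract when the problem admits more than one solution.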

Original language: English
Article number: 025007
Journal: Inverse Problems
Volume: 40
Issue number: 2
DOIs:
Publication status: Published - Feb 2024

Keywords

  • Ill-posed problems
  • convergence
  • martingales
  • random discrepancy principle
  • stochastic linear regularization method
