Revisiting linear machine learning through the perspective of inverse problems

Shuang Liu, Sergey Kabanikhin, Sergei Strijhak*, Ying Ao Wang, Ye Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we revisit Linear Neural Networks (LNNs) with single-output neurons performing linear operations. The study focuses on constructing an optimal regularized weight matrix Q from training pairs {G, H}, reformulating the LNNs framework as matrix equations, and solving it as a linear inverse problem. The ill-posedness of linear machine learning problems is analyzed through the lens of inverse problems. Furthermore, classical and modern regularization techniques from both the machine learning and inverse problems communities are reviewed. The effectiveness of LNNs is demonstrated through a real-world application in blood test classification, highlighting their practical value in solving real-life problems.
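A minimal sketch of the core idea, assuming the linear network is written as the matrix equation Q G ≈ H (one training input per column of G, the corresponding target per column of H) and regularized in the Tikhonov sense; the variable names, the choice of regularizer, and the closed-form solution below are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def regularized_weights(G: np.ndarray, H: np.ndarray, lam: float) -> np.ndarray:
    """Tikhonov-regularized solution Q of the matrix equation Q G ≈ H.

    Minimizes ||Q G - H||_F^2 + lam * ||Q||_F^2, whose normal equations give
    Q = H G^T (G G^T + lam I)^{-1}.
    """
    n = G.shape[0]
    # Solve the symmetric system (G G^T + lam I) X = G H^T, then Q = X^T.
    A = G @ G.T + lam * np.eye(n)
    X = np.linalg.solve(A, G @ H.T)
    return X.T

# Toy usage: 5 training pairs of 3-dimensional inputs and 2-dimensional targets.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 5))                       # inputs, one column per pair
Q_true = rng.standard_normal((2, 3))                  # ground-truth linear map
H = Q_true @ G + 0.01 * rng.standard_normal((2, 5))   # noisy targets
Q = regularized_weights(G, H, lam=1e-3)
print(np.linalg.norm(Q - Q_true))                     # small error for mild noise
```

The regularization parameter lam controls the trade-off the abstract alludes to: without it (lam = 0) the normal equations can be singular or severely ill-conditioned, which is exactly the ill-posedness analyzed in the paper.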

Original language: English
Pages (from-to): 281-303
Number of pages: 23
Journal: Journal of Inverse and Ill-Posed Problems
Volume: 33
Issue number: 2
DOIs
Publication status: Published - 1 Apr 2025
Externally published: Yes

Keywords

  • linear inverse and ill-posed problems
  • linear neural network
  • machine learning
  • regularization

