Abstract
In this paper, we revisit Linear Neural Networks (LNNs) with single-output neurons performing linear operations. The study focuses on constructing an optimal regularized weight matrix Q from training pairs {G, H}, reformulating the LNNs framework as matrix equations, and addressing it as a linear inverse problem. The ill-posedness of linear machine learning problems is analyzed through the lens of inverse problems. Furthermore, classical and modern regularization techniques from both the machine learning and inverse problems communities are reviewed. The effectiveness of LNNs is demonstrated through a real-world application in blood test classification, highlighting their practical value in solving real-life problems.
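A minimal sketch of the kind of construction the abstract describes, assuming a Tikhonov-style (ridge) regularization; the function name `regularized_weights` and the regularization parameter `lam` are illustrative, not taken from the paper:

```python
import numpy as np

def regularized_weights(G, H, lam=1e-2):
    """Hypothetical sketch: compute a regularized weight matrix Q for a
    linear network from training pairs (G, H) by solving the matrix
    equation min_Q ||G Q - H||_F^2 + lam * ||Q||_F^2, i.e. the
    regularized normal equations (G^T G + lam I) Q = G^T H."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ H)

# Synthetic example: recover a known linear map from noisy pairs.
rng = np.random.default_rng(0)
G = rng.standard_normal((100, 5))       # inputs, one sample per row
Q_true = rng.standard_normal((5, 2))    # ground-truth weights
H = G @ Q_true + 0.01 * rng.standard_normal((100, 2))  # noisy targets
Q = regularized_weights(G, H, lam=1e-3)
```

For an ill-conditioned G, the `lam * np.eye(n)` term keeps the system solvable at the cost of a small bias, which is the classical inverse-problems trade-off the abstract refers to.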
Original language | English |
---|---|
Pages (from-to) | 281-303 |
Number of pages | 23 |
Journal | Journal of Inverse and Ill-Posed Problems |
Volume | 33 |
Issue number | 2 |
DOIs | |
Publication status | Published - 1 Apr 2025 |
Externally published | Yes |
Keywords
- linear inverse and ill-posed problems
- linear neural network
- machine learning
- regularization