Abstract
Orthogonality has been demonstrated to admit many desirable properties, such as noise tolerance, suitability for data visualization, and distance preservation. However, it is often incompatible with existing models, and even when it is compatible, the resulting optimization problem is challenging. To address these issues, we propose a trace ratio formulation for multi-view subspace learning that learns an individual orthogonal projection for each view. The proposed formulation integrates the correlations among multiple views, supervised discriminant capacity, and distance preservation in a concise and compact way. It not only includes several existing models as special cases, but also inspires new models. Moreover, an efficient numerical method based on successive approximations via eigenvectors is presented to solve the associated optimization problem. The method is built upon an iterative Krylov subspace method, which scales easily to high-dimensional datasets. Extensive experiments are conducted on various real-world datasets for multi-view discriminant analysis and multi-view multi-label classification. The experimental results demonstrate that the proposed models are consistently competitive with, and often better than, the compared methods.
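To give a flavor of the kind of problem solved here, the following is a minimal sketch of the classic trace ratio iteration: maximize tr(WᵀAW)/tr(WᵀBW) subject to WᵀW = I by repeatedly taking the top-k eigenvectors of A − ρB and updating ρ. This is a generic illustration of successive approximations via eigenvectors, not the paper's exact algorithm (the paper's method additionally uses an iterative Krylov subspace solver for scalability); the matrices A and B here stand in for generic scatter matrices.

```python
import numpy as np

def trace_ratio(A, B, k, iters=200, tol=1e-10):
    """Maximize tr(W^T A W) / tr(W^T B W) subject to W^T W = I.

    A, B: symmetric matrices (B positive definite), k: subspace dimension.
    Each step takes the top-k eigenvectors of A - rho * B, then updates rho.
    """
    rho = 0.0
    W = None
    for _ in range(iters):
        # eigh returns eigenvalues in ascending order; take the top-k eigenvectors
        _, vecs = np.linalg.eigh(A - rho * B)
        W = vecs[:, -k:]
        new_rho = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(new_rho - rho) < tol:
            rho = new_rho
            break
        rho = new_rho
    return W, rho
```

The iteration is known to increase ρ monotonically, and the returned W is orthonormal by construction since it consists of eigenvectors of a symmetric matrix.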
Original language | English |
---|---|
Pages (from-to) | 100-116 |
Number of pages | 17 |
Journal | Neurocomputing |
Volume | 512 |
DOIs | |
Publication status | Published - 1 Nov 2022 |
Externally published | Yes |
Keywords
- Krylov subspace method
- Orthogonal multi-view analysis
- Successive approximations via eigenvectors