Abstract
Preserving the global and local structures of data during projection learning is crucial for feature extraction. Although various methods have been proposed for this goal, they commonly introduce an extra graph regularization term together with a corresponding regularization parameter that must be tuned. Tuning this parameter manually is not only time-consuming but also makes it difficult to find the optimal value for satisfactory performance, which greatly limits the applicability of these methods. In addition, the projections learned by many methods lack interpretability, and their performance is often sensitive to the selected feature dimension. To address these problems, a novel method named low-rank preserving projection via graph regularized reconstruction (LRPP_GRR) is proposed. In particular, LRPP_GRR imposes the graph constraint on the reconstruction error of the data, rather than introducing an extra regularization term, to capture the local structure of the data, which greatly reduces the complexity of the model. Meanwhile, a low-rank reconstruction term is exploited to preserve the global structure of the data. To improve the interpretability of the learned projection, a sparse term with the l2,1-norm is imposed on the projection. Furthermore, we introduce an orthogonal reconstruction constraint so that the learned projection retains the main energy of the data, which makes LRPP_GRR more flexible in selecting the feature dimension. Extensive experimental results show that the proposed method achieves performance competitive with other state-of-the-art methods.
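The abstract does not spell out the objective function, but one plausible sketch of how its components could fit together is given below; the affinity matrix W, the trade-off weights λ1 and λ2, the graph-weighted coupling of the reconstruction residuals, and the plain orthogonality constraint standing in for the paper's orthogonal reconstruction constraint are all assumptions made for illustration, not the authors' verbatim formulation:

$$
\min_{P,\,Z}\;\;\|Z\|_{*}
\;+\;\lambda_{1}\sum_{i,j} W_{ij}\,\bigl\|P^{\top}x_{i}-P^{\top}Xz_{j}\bigr\|_{2}^{2}
\;+\;\lambda_{2}\,\|P\|_{2,1}
\qquad\text{s.t.}\;\;P^{\top}P=I .
$$

Under this reading, the nuclear norm on the self-representation Z encourages a low-rank structure (global structure), the W_ij-weighted residuals fold the graph directly into the reconstruction error so that no separate graph regularization term and parameter are required (local structure), and the l2,1-norm drives whole rows of P to zero, which is what gives the learned projection its interpretability.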
| Original language | English |
| --- | --- |
| Article number | 8293687 |
| Pages (from-to) | 1279-1291 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Cybernetics |
| Volume | 49 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Apr 2019 |
| Externally published | Yes |
Keywords
- Feature extraction
- Feature selection
- Graph regularization
- Low-rank representation (LRR)