TY - JOUR
T1 - Probabilistic Dimensionality Reduction via Structure Learning
AU - Wang, Li
AU - Mao, Qi
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2019/1/1
Y1 - 2019/1/1
N2 - We propose an alternative probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a set of embedding points in a low-dimensional space by retaining the inherent structure of high-dimensional data. The objective function of this new model can be equivalently interpreted as two coupled learning problems, i.e., structure learning and the learning of the projection matrix. Inspired by this interpretation, we propose another model, which finds a set of embedding points that can directly form an explicit graph structure. We prove that the model with explicit graph learning generalizes the reversed graph embedding method and leads to a natural interpretation from a Bayesian perspective. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments demonstrate that the proposed framework is able to retain the inherent structure of datasets and achieve competitive quantitative results in terms of various performance evaluation criteria.
AB - We propose an alternative probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a set of embedding points in a low-dimensional space by retaining the inherent structure of high-dimensional data. The objective function of this new model can be equivalently interpreted as two coupled learning problems, i.e., structure learning and the learning of the projection matrix. Inspired by this interpretation, we propose another model, which finds a set of embedding points that can directly form an explicit graph structure. We prove that the model with explicit graph learning generalizes the reversed graph embedding method and leads to a natural interpretation from a Bayesian perspective. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments demonstrate that the proposed framework is able to retain the inherent structure of datasets and achieve competitive quantitative results in terms of various performance evaluation criteria.
KW - latent variable model
KW - nonlinear dimensionality reduction
KW - probabilistic models
KW - structure learning
UR - http://www.scopus.com/inward/record.url?scp=85039786236&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2017.2785402
DO - 10.1109/TPAMI.2017.2785402
M3 - Article
C2 - 29990039
AN - SCOPUS:85039786236
SN - 0162-8828
VL - 41
SP - 205
EP - 219
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 1
M1 - 8226989
ER -