Probabilistic Dimensionality Reduction via Structure Learning

Li Wang*, Qi Mao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

24 Citations (Scopus)

Abstract

We propose an alternative probabilistic dimensionality reduction framework that naturally integrates a generative model with the locality information of data. Based on this framework, we present a new model that learns a set of embedding points in a low-dimensional space by retaining the inherent structure of the high-dimensional data. The objective function of this model can be equivalently interpreted as two coupled learning problems: structure learning and the learning of a projection matrix. Inspired by this interpretation, we propose another model, which finds a set of embedding points that directly form an explicit graph structure. We prove that the model that learns explicit graphs generalizes the reversed graph embedding method and admits a natural interpretation from a Bayesian perspective. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments demonstrate that the proposed framework retains the inherent structure of datasets and achieves competitive quantitative results under various performance evaluation criteria.
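The "two coupled learning problems" interpretation can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the paper's actual algorithm: it alternates between (1) learning graph weights from pairwise distances in the current embedding and (2) re-learning a linear projection that keeps strongly connected points close, via a graph-Laplacian penalty. All variable names and the specific update rules are illustrative assumptions.

```python
# Hypothetical sketch (NOT the paper's method): alternating optimization
# coupling structure learning (graph weights S) with projection learning
# (matrix W), mirroring the interpretation described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))          # 50 points in 5-D, synthetic data
X -= X.mean(axis=0)                   # center the data
d = 2                                 # target embedding dimensionality

# Initialize W with the top-d principal directions (plain PCA).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:d].T                          # (5, 2) projection matrix

for _ in range(10):
    Z = X @ W                                        # current embedding
    D = ((Z[:, None] - Z[None]) ** 2).sum(-1)        # pairwise sq. distances
    S = np.exp(-D)                                   # graph weights from embedding
    np.fill_diagonal(S, 0.0)                         # no self-loops
    S /= S.sum(axis=1, keepdims=True)                # row-normalize weights
    L = np.diag(S.sum(axis=1)) - (S + S.T) / 2       # graph Laplacian
    # Re-learn W: favor directions of high variance while penalizing
    # embeddings that separate strongly connected points.
    M = X.T @ X - X.T @ L @ X                        # symmetric objective matrix
    eigvals, eigvecs = np.linalg.eigh(M)             # ascending eigenvalues
    W = eigvecs[:, -d:]                              # top-d eigenvectors

print(W.shape)  # (5, 2)
```

The alternation converges quickly on toy data because each step reuses the other's output: the graph is rebuilt from the embedding, and the projection is refit to the graph.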

Original language: English
Article number: 8226989
Pages (from-to): 205-219
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2019
Externally published: Yes

Keywords

  • Nonlinear dimensionality reduction
  • latent variable model
  • probabilistic models
  • structure learning

