Self-Supervised Information Bottleneck for Deep Multi-View Subspace Clustering

Shiye Wang, Changsheng Li*, Yanming Li, Ye Yuan, Guoren Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

34 Citations (Scopus)

Abstract

In this paper, we explore the problem of deep multi-view subspace clustering from an information-theoretic point of view. We extend the traditional information bottleneck principle to learn common information among different views in a self-supervised manner, and accordingly establish a new framework called Self-supervised Information Bottleneck based Multi-view Subspace Clustering (SIB-MSC). Inheriting the advantages of the information bottleneck, SIB-MSC learns a latent space for each view that captures the information common to the latent representations of the other views, removing superfluous information from the view itself while retaining the information sufficient for the other views' latent representations. In effect, the latent representation of each view provides a self-supervised signal for training the latent representations of the other views. Moreover, SIB-MSC disentangles a second latent space for each view to capture view-specific information by introducing mutual-information-based regularization terms, further improving the performance of multi-view subspace clustering. Extensive experiments on real-world multi-view data demonstrate that our method achieves superior performance over related state-of-the-art methods.
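The training principle sketched in the abstract can be written compactly in standard information-bottleneck notation. The symbols below ($X_v$ for the input of view $v$, $Z_v$ for its common latent, $\beta$ for the compression trade-off) are my assumptions for illustration, not notation taken from the paper itself; one plausible reading of the per-view objective is:

```latex
% For each view v: keep Z_v predictive of the other views' latents
% (the self-supervised signal), while compressing away information
% that is superfluous to them. Notation is assumed, not the paper's.
\max_{Z_v} \; \sum_{w \neq v} I(Z_v; Z_w) \;-\; \beta \, I(Z_v; X_v)
```

Under this reading, the first term enforces sufficiency of $Z_v$ for the other views' representations, the second term enforces minimality with respect to the view's own input, and the separate view-specific latent spaces would be encouraged, via the additional mutual-information regularizers the abstract mentions, to capture what the compression discards.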

Original language: English
Pages (from-to): 1555-1567
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 32
DOIs
Publication status: Published - 2023

Keywords

  • Information bottleneck
  • multi-view
  • self-supervised learning
  • subspace clustering
