TY - JOUR
T1 - Learning Uniform Latent Representation via Alternating Adversarial Network for Multi-View Clustering
AU - Zhang, Yue
AU - Huang, Weitian
AU - Zhang, Xiaoxue
AU - Yang, Sirui
AU - Zhang, Fa
AU - Gao, Xin
AU - Cai, Hongmin
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2025
Y1 - 2025
N2 - Multi-view clustering aims to exploit the complementary information contained in different views to partition samples into distinct categories. Popular approaches either directly integrate features from different views or capture the common portion shared between views, without closing the heterogeneity gap. Such rigid schemes do not account for possible misalignment among different views and thus fail to learn a consistent yet comprehensive representation, leading to inferior clustering performance. To tackle this drawback, we introduce an alternating adversarial learning strategy that drives different views into the same semantic space. We first present a Linear Alternating Adversarial Multi-view Clustering (Linear-A2MC) model to align views in linear embedding spaces. To exploit the feature extraction capability of deep networks, we further build a Deep Alternating Adversarial Multi-view Clustering (Deep-A2MC) network that realizes non-linear transformations and feature pruning across different views simultaneously. Specifically, Deep-A2MC leverages alternating adversarial learning to first align the low-dimensional embedding distributions, and then synthesizes a mixed latent representation from the multiple views through attention learning. Finally, a self-supervised clustering loss is jointly optimized in the unified network to guide the learning of discriminative representations that yield compact clusters. Extensive experiments on six real-world datasets with widely varying sample sizes demonstrate that Deep-A2MC achieves superior clustering performance compared with twelve baseline methods.
AB - Multi-view clustering aims to exploit the complementary information contained in different views to partition samples into distinct categories. Popular approaches either directly integrate features from different views or capture the common portion shared between views, without closing the heterogeneity gap. Such rigid schemes do not account for possible misalignment among different views and thus fail to learn a consistent yet comprehensive representation, leading to inferior clustering performance. To tackle this drawback, we introduce an alternating adversarial learning strategy that drives different views into the same semantic space. We first present a Linear Alternating Adversarial Multi-view Clustering (Linear-A2MC) model to align views in linear embedding spaces. To exploit the feature extraction capability of deep networks, we further build a Deep Alternating Adversarial Multi-view Clustering (Deep-A2MC) network that realizes non-linear transformations and feature pruning across different views simultaneously. Specifically, Deep-A2MC leverages alternating adversarial learning to first align the low-dimensional embedding distributions, and then synthesizes a mixed latent representation from the multiple views through attention learning. Finally, a self-supervised clustering loss is jointly optimized in the unified network to guide the learning of discriminative representations that yield compact clusters. Extensive experiments on six real-world datasets with widely varying sample sizes demonstrate that Deep-A2MC achieves superior clustering performance compared with twelve baseline methods.
KW - adversarial learning
KW - deep clustering
KW - Multi-view clustering
KW - representation learning
UR - http://www.scopus.com/inward/record.url?scp=86000660033&partnerID=8YFLogxK
U2 - 10.1109/TETCI.2025.3540426
DO - 10.1109/TETCI.2025.3540426
M3 - Article
AN - SCOPUS:86000660033
SN - 2471-285X
JO - IEEE Transactions on Emerging Topics in Computational Intelligence
JF - IEEE Transactions on Emerging Topics in Computational Intelligence
ER -