TY - JOUR
T1 - DWCL
T2 - Dual-Weighted Contrastive Learning for robust multi-view clustering
AU - Yuan, Hanning
AU - Zhang, Zhihui
AU - Guo, Qi
AU - Chi, Lianhua
AU - Ruan, Sijie
AU - Zhou, Wei
AU - Pang, Jinhui
AU - Hao, Xiaoshuai
N1 - Publisher Copyright:
© 2025 Elsevier Ltd.
PY - 2026/2/1
Y1 - 2026/2/1
N2 - Multi-view contrastive clustering (MVCC) aims to learn consistent clustering structures from multiple views by maximizing the agreement between view-specific representations. However, existing methods often construct all pairwise cross-views indiscriminately, leading to numerous unreliable view combinations and representation degeneration. To address these issues, we propose Dual-Weighted Contrastive Learning (DWCL), a novel framework that selects the most reliable view using the silhouette coefficient and constructs targeted cross-views with other views via a Best-Other (B-O) contrastive mechanism. This strategy reduces the number of cross-views from quadratic to linear complexity, significantly improving computational efficiency. Additionally, we introduce a dual-weighting strategy that combines a view quality weight and a view discrepancy weight to adaptively emphasize high-quality, low-discrepancy cross-views. Extensive experiments on eight multi-view datasets demonstrate that DWCL consistently outperforms state-of-the-art methods. Specifically, DWCL achieves an absolute accuracy improvement of 3.5% on Caltech5V7 and 4.4% on CIFAR10. Theoretical analysis further validates the advantages of DWCL in improving mutual information bounds and reducing the influence of low-quality views. These results confirm that DWCL is a robust and efficient solution for scalable multi-view clustering.
KW - Contrastive learning
KW - Multi-view clustering
KW - Weighting strategy
UR - https://www.scopus.com/pages/publications/105024877124
U2 - 10.1016/j.engappai.2025.113532
DO - 10.1016/j.engappai.2025.113532
M3 - Article
AN - SCOPUS:105024877124
SN - 0952-1976
VL - 165
JO - Engineering Applications of Artificial Intelligence
JF - Engineering Applications of Artificial Intelligence
M1 - 113532
ER -