TY - JOUR
T1 - Towards Very Deep Representation Learning for Subspace Clustering
AU - Li, Yanming
AU - Wang, Shiye
AU - Li, Changsheng
AU - Yuan, Ye
AU - Wang, Guoren
N1 - Publisher Copyright:
IEEE
PY - 2024
Y1 - 2024
AB - Deep subspace clustering based on the self-expressive layer has attracted increasing attention in recent years. Because of the self-expressive layer, these methods must load the whole dataset as a single batch to learn the self-expressive coefficients. Such a learning strategy places a heavy burden on memory, which severely hinders the use of deeper network architectures (e.g., ResNet) and becomes a bottleneck when scaling to large datasets. In this paper, we propose a new deep subspace clustering framework to address these challenges. In contrast to previous approaches, which take the weights of a fully connected layer as the self-expressive coefficients, we obtain the self-expressive coefficients by learning an energy-based network in a mini-batch training manner. Consequently, it is no longer necessary to load all data into one batch for learning, which avoids the above issue. Given the powerful representation ability of recently popular self-supervised learning, we leverage self-supervised representation learning to learn the dictionary for representing the data. Finally, we propose a joint framework that learns the self-expressive coefficients and the dictionary simultaneously. Extensive experiments on three publicly available datasets demonstrate the effectiveness of our method.
KW - Data models
KW - Deep learning
KW - Dictionaries
KW - Load modeling
KW - Representation learning
KW - Self-supervised learning
KW - Subspace clustering
KW - Training
UR - http://www.scopus.com/inward/record.url?scp=85184795684&partnerID=8YFLogxK
U2 - 10.1109/TKDE.2024.3362984
DO - 10.1109/TKDE.2024.3362984
M3 - Article
AN - SCOPUS:85184795684
SN - 1041-4347
SP - 1
EP - 13
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
ER -