TY - GEN
T1 - Graph Information Interaction on Feature and Structure via Cross-modal Contrastive Learning
AU - Wen, Jinyong
AU - Wang, Yuhu
AU - Zhang, Chunxia
AU - Xiang, Shiming
AU - Pan, Chunhong
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The abundant feature and structure information on graphs provides a potential guarantee for learning high-quality representations without supervision. The feature attribute represents the inherent properties of nodes, while the structure attribute describes their neighborhood relationships. These two types of attributes can be regarded as different modal forms of the same instance and should be consistent in identifying a member. We propose to directly regard the feature and structure attributes as two separate views and embed this consistency into a contrastive learning method, realizing graph information interaction on feature and structure in a cross-modal contrastive framework. Under this framework, node representations are learned in an unsupervised manner by maximizing the agreement between the feature representation and the structure representation. In terms of negative samples, instead of randomly sampling points from the empirical distribution, a simple yet effective multi-sample mixing strategy is proposed to synthesize true negative samples with greater probability, alleviating the tricky false-negative issue. Extensive experiments on multiple types of graphs demonstrate the effectiveness of the proposed method.
AB - The abundant feature and structure information on graphs provides a potential guarantee for learning high-quality representations without supervision. The feature attribute represents the inherent properties of nodes, while the structure attribute describes their neighborhood relationships. These two types of attributes can be regarded as different modal forms of the same instance and should be consistent in identifying a member. We propose to directly regard the feature and structure attributes as two separate views and embed this consistency into a contrastive learning method, realizing graph information interaction on feature and structure in a cross-modal contrastive framework. Under this framework, node representations are learned in an unsupervised manner by maximizing the agreement between the feature representation and the structure representation. In terms of negative samples, instead of randomly sampling points from the empirical distribution, a simple yet effective multi-sample mixing strategy is proposed to synthesize true negative samples with greater probability, alleviating the tricky false-negative issue. Extensive experiments on multiple types of graphs demonstrate the effectiveness of the proposed method.
KW - Cross-modal graph contrastive learning
KW - feature-structure consistency
KW - negative sampling
UR - https://www.scopus.com/pages/publications/85171140991
U2 - 10.1109/ICME55011.2023.00187
DO - 10.1109/ICME55011.2023.00187
M3 - Conference contribution
AN - SCOPUS:85171140991
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
SP - 1068
EP - 1073
BT - Proceedings - 2023 IEEE International Conference on Multimedia and Expo, ICME 2023
PB - IEEE Computer Society
T2 - 2023 IEEE International Conference on Multimedia and Expo, ICME 2023
Y2 - 10 July 2023 through 14 July 2023
ER -