TY - JOUR
T1 - Representing Graphs via Gromov-Wasserstein Factorization
AU - Xu, Hongteng
AU - Liu, Jiachang
AU - Luo, Dixin
AU - Carin, Lawrence
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Graph representation is a challenging and significant problem for many real-world applications. In this work, we propose a novel paradigm called 'Gromov-Wasserstein Factorization' (GWF) to learn graph representations in a flexible and interpretable way. Given a set of graphs, whose node correspondences are unknown and whose sizes can differ, our GWF model reconstructs each graph as a weighted combination of 'graph factors' under a pseudo-metric called the Gromov-Wasserstein (GW) discrepancy. This model leads to a new nonlinear factorization mechanism for graphs. The graph factors, which are shared by all the graphs, represent typical patterns of the graphs' structures. The weights associated with each graph indicate the graph factors' contributions to that graph's reconstruction, leading to a permutation-invariant graph representation. We learn the graph factors of the GWF model and the weights of the graphs jointly by minimizing the overall reconstruction error. When learning the model, we reparametrize the graph factors and the weights as unconstrained model parameters and simplify the backpropagation of gradients with the help of the envelope theorem. To compute the GW discrepancy (the critical training step), we consider two algorithms, based on the proximal point algorithm (PPA) and the Bregman alternating direction method of multipliers (BADMM), respectively. Furthermore, we propose several extensions of the GWF model, including (i) combining it with a graph neural network to learn graph representations in an auto-encoding manner, (ii) representing graphs with node attributes, and (iii) serving as a regularizer for semi-supervised graph classification. Experiments on various datasets demonstrate that our GWF model is comparable to state-of-the-art methods. The graph representations it derives perform well in graph clustering and classification tasks.
AB - Graph representation is a challenging and significant problem for many real-world applications. In this work, we propose a novel paradigm called 'Gromov-Wasserstein Factorization' (GWF) to learn graph representations in a flexible and interpretable way. Given a set of graphs, whose node correspondences are unknown and whose sizes can differ, our GWF model reconstructs each graph as a weighted combination of 'graph factors' under a pseudo-metric called the Gromov-Wasserstein (GW) discrepancy. This model leads to a new nonlinear factorization mechanism for graphs. The graph factors, which are shared by all the graphs, represent typical patterns of the graphs' structures. The weights associated with each graph indicate the graph factors' contributions to that graph's reconstruction, leading to a permutation-invariant graph representation. We learn the graph factors of the GWF model and the weights of the graphs jointly by minimizing the overall reconstruction error. When learning the model, we reparametrize the graph factors and the weights as unconstrained model parameters and simplify the backpropagation of gradients with the help of the envelope theorem. To compute the GW discrepancy (the critical training step), we consider two algorithms, based on the proximal point algorithm (PPA) and the Bregman alternating direction method of multipliers (BADMM), respectively. Furthermore, we propose several extensions of the GWF model, including (i) combining it with a graph neural network to learn graph representations in an auto-encoding manner, (ii) representing graphs with node attributes, and (iii) serving as a regularizer for semi-supervised graph classification. Experiments on various datasets demonstrate that our GWF model is comparable to state-of-the-art methods. The graph representations it derives perform well in graph clustering and classification tasks.
KW - Graph representation
KW - factorization model
KW - Gromov-Wasserstein discrepancy
KW - neural networks
KW - permutation-invariance
UR - http://www.scopus.com/inward/record.url?scp=85125340815&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2022.3153126
DO - 10.1109/TPAMI.2022.3153126
M3 - Article
C2 - 35196227
AN - SCOPUS:85125340815
SN - 0162-8828
VL - 45
SP - 999
EP - 1016
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 1
ER -