Efficient Out-of-Distribution Generalization for Pre-trained GNNs via Prompt Learning

Abstract
Graph out-of-distribution (OOD) generalization enables Graph Neural Networks (GNNs) to perform robustly in applications where they confront new environments not encountered during training. To achieve OOD generalization, current methods increasingly focus on designing ever more complex causal inference models trained from scratch, which overlooks the potential of existing pre-trained models and increases computational complexity. In this paper, we propose an efficient framework (EGOG) that uses prompt learning to let pre-trained GNNs extract causal subgraphs from input graph data, thereby enhancing the generalization ability of the pre-trained GNNs. Specifically, we design a graph causal prompt module. First, we combine graph prompt learning with a causal feature generator that enables pre-trained GNNs to decouple causal features from spurious features in graph nodes. Then, considering that graph structure is also affected by distribution shift, we further design a causal structure generator, which reuses the pre-trained GNNs to analyze the graph's structure and decouple causal substructures from spurious substructures. This approach enhances the pre-trained GNN's ability to extract the causal subgraph from both the node and the structure perspective. Meanwhile, we employ gradient reversal layers and an information bottleneck to encourage our method to disentangle the causal subgraph from the spurious subgraph more accurately. Experimental results show that EGOG can be applied to pre-trained GNNs to endow them with generalization ability. Compared with the state-of-the-art baseline, EGOG achieves an average relative improvement of 2.73% on two synthetic datasets while using, on average, only 4.38% of the trainable parameters.
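The abstract mentions gradient reversal layers as one of the disentanglement mechanisms. As a minimal sketch independent of the paper's actual implementation (function names and the scaling coefficient `lam` are our own, purely illustrative choices), the core gradient-reversal idea is an identity map in the forward pass whose backward pass flips the gradient sign, so a shared encoder is trained adversarially against a spurious-feature head:

```python
def grad_reverse_forward(x):
    """Forward pass: identity, features pass through unchanged."""
    return x

def grad_reverse_backward(grad_output, lam=1.0):
    """Backward pass: negate (and scale) the incoming gradient, so the
    shared encoder is updated *against* the spurious-feature objective."""
    return [-lam * g for g in grad_output]

# Toy check: forward is the identity, backward negates the gradient.
features = [1.0, 2.0, -3.0]
out = grad_reverse_forward(features)        # [1.0, 2.0, -3.0]
grads = grad_reverse_backward([1.0, 1.0, 1.0], lam=0.5)  # [-0.5, -0.5, -0.5]
```

In an autograd framework this would be wrapped as a custom operator (e.g. a `torch.autograd.Function`) rather than two free functions, but the forward/backward contract is the same.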
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Artificial Intelligence |
| DOIs | |
| Publication status | Accepted/In press - 2026 |
| Externally published | Yes |
Keywords
- Efficient Learning
- Graph Neural Networks
- Out-of-Distribution Generalization
- Prompt Learning