TY - JOUR
T1 - PriGraph: Defending against Inference Attacks on Graph Neural Networks via Policy-Based Adversarial Perturbations
T2 - IEEE Transactions on Dependable and Secure Computing
AU - Shen, Meng
AU - Lu, Hao
AU - Gu, Aijing
AU - Li, Qi
AU - Xu, Ke
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Graph Neural Networks (GNNs) have been widely used in various domains, such as social networks and transportation networks. Previous research has shown that GNNs are vulnerable to inference attacks. Node Membership Inference Attacks (NMIAs) on GNNs infer whether a set of graph data records belongs to the training graph of a target model. Link Status Inference Attacks (LSIAs) against GNNs aim to infer whether a link exists between a pair of nodes in the graph used to train the target GNN model. Specifically, given black-box access to a GNN model, NMIAs and LSIAs are conducted by analyzing the model's outputs (e.g., confidence score vectors). Defenses against these two score-based inference attacks face the challenge of achieving effective protection while maintaining the utility of the GNN model. In this paper, we propose PriGraph, a defense mechanism that protects the node privacy and link privacy of training graph data while maintaining the high accuracy of the target GNN model. PriGraph adds crafted adversarial perturbations to the outputs of the target GNN model via two key components, i.e., a defense auxiliary classifier and an adversarial perturbation generator, which together find the minimal adversarial perturbations that reduce attack accuracy while preserving node classification performance. We evaluate PriGraph with different GNN models on multiple benchmark datasets. The results show that PriGraph dramatically reduces the attack accuracy of NMIAs and LSIAs on GNNs, providing a superior trade-off between model utility and privacy.
KW - Graph Neural Networks
KW - adversarial perturbations
KW - link status inference attacks
KW - node membership inference attacks
UR - https://www.scopus.com/pages/publications/105017013462
U2 - 10.1109/TDSC.2025.3612195
DO - 10.1109/TDSC.2025.3612195
M3 - Article
AN - SCOPUS:105017013462
SN - 1545-5971
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
ER -