PriGraph: Defending against Inference Attacks on Graph Neural Networks via Policy-Based Adversarial Perturbations

Research output: Contribution to journal › Article › peer-review

Abstract

Graph Neural Networks (GNNs) have been widely used in various domains, such as social networks and transportation networks. Previous research has shown that GNNs are vulnerable to inference attacks. Node Membership Inference Attacks (NMIA) on GNNs infer whether a set of graph data records belongs to the training graph of a target model. Link Status Inference Attacks (LSIA) against GNNs aim to infer whether a link exists between a pair of nodes in the graph used to train the target GNN model. Specifically, given black-box access to a GNN model, NMIAs and LSIAs are conducted by analyzing the outputs (e.g., confidence score vectors) of the model. Defense methods against these two score-based inference attacks face the challenge of achieving effective defense performance while maintaining the utility of GNN models. In this paper, we propose PriGraph, a defense mechanism that protects the node privacy and link privacy of training graph data while maintaining the high accuracy of the target GNN models. PriGraph adds crafted adversarial perturbations to the outputs of the target GNN model through two key components, i.e., a defense auxiliary classifier and an adversarial perturbation generator, which are used to find the minimal adversarial perturbations that reduce the attack accuracy while preserving node-classification performance. We evaluate PriGraph with different GNN models and multiple benchmark datasets. The results show that PriGraph dramatically reduces the attack accuracy of NMIA and LSIA on GNNs, providing a superior trade-off between model utility and privacy.
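The paper itself does not include code; the following is a minimal sketch of the general idea the abstract describes: perturbing a model's confidence score vector to weaken a score-based inference attack while keeping the predicted class unchanged. The `attack_grad_fn` callback and all parameter names are hypothetical stand-ins, not PriGraph's actual components.

```python
import numpy as np

def perturb_confidence(scores, attack_grad_fn, eps=0.01, max_iter=50):
    """Iteratively add a small perturbation to a confidence vector `scores`
    so that an attack objective (supplied as a gradient function) decreases,
    stopping before the predicted label (argmax) would flip.

    attack_grad_fn: returns the gradient of the attack score w.r.t. the
    confidence vector (hypothetical stand-in for a defense auxiliary
    classifier's signal)."""
    perturbed = scores.copy()
    label = np.argmax(scores)
    for _ in range(max_iter):
        grad = attack_grad_fn(perturbed)
        # Step against the attack objective.
        candidate = perturbed - eps * grad
        # Keep the result a valid probability distribution.
        candidate = np.clip(candidate, 1e-6, None)
        candidate = candidate / candidate.sum()
        if np.argmax(candidate) != label:
            break  # stop before changing the model's prediction
        perturbed = candidate
    return perturbed

# Toy attack signal: score-based attacks often exploit over-confident
# outputs, so use the top entry's indicator as a stand-in gradient.
scores = np.array([0.7, 0.2, 0.1])
toy_grad = lambda s: np.eye(len(s))[np.argmax(s)]
defended = perturb_confidence(scores, toy_grad)
```

In this toy setting the perturbation flattens the confidence vector (reducing the attack-exploitable confidence gap) while the argmax, and hence the node-classification output, is unchanged; PriGraph's actual generator is learned rather than a fixed gradient step.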

Original language: English
Journal: IEEE Transactions on Dependable and Secure Computing
DOIs
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • Graph Neural Networks
  • adversarial perturbations
  • link status inference attacks
  • node membership inference attacks
