Securing Data Privacy in NIDS: Black-Box Adversarial Attacks

  • Dawei Xu
  • Yunfang Liang
  • Yunfan Yang
  • Yajie Wang
  • Baokun Zheng*
  • Chuan Zhang*
  • Liehuang Zhu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing importance of privacy and data security in network communications, network intrusion detection systems (NIDSs) play a vital role in safeguarding against unauthorized access and data breaches. NIDSs use machine learning or deep learning models to distinguish normal traffic from malicious traffic and take preventive action when suspicious activity is identified. However, the vulnerability of these models to adversarial attacks poses a significant threat to data privacy and security: attackers can craft adversarial examples to evade NIDS detection, potentially compromising sensitive information. Existing research on adversarial attacks focuses primarily on white-box scenarios, which assume that the attacker has complete knowledge of the target model; this assumption is unrealistic in real-world settings. Moreover, adversarial examples generated through random or unconstrained perturbations are often easily detected by classifiers and may not retain their full attack capability. To address these issues, this article explores a black-box adversarial attack approach that uses a substitute model to approximate the output of the target model without requiring detailed knowledge of it, combined with an adversarial example generation method (A-M) that applies realistic constraints, making the attack more consistent with real-world data privacy and security conditions. In the evaluation, a deep neural network (DNN) served as the base model and was compared against various other models. On the NSL-KDD and KDD-CUP 99 datasets, the generated adversarial examples reduced classification accuracy to around 50% in both binary and multiclass scenarios, demonstrating the effectiveness of the method.
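The abstract describes a two-part pipeline: train a substitute model using only the target NIDS's outputs, then generate adversarial examples on the substitute under realistic feature constraints and transfer them to the target. The sketch below illustrates that general idea, not the paper's actual A-M algorithm: the toy `target_predict` oracle, the logistic-regression substitute, the FGSM-style perturbation, and the feature `mask` (standing in for immutable traffic fields such as protocol type) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the deployed black-box NIDS: the attacker can
# only query it for labels (1 = malicious, 0 = benign), never see its weights.
def target_predict(X):
    return (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)

# Step 1: build a substitute model. Query the target on probe traffic and fit
# a simple logistic-regression surrogate to its answers by gradient descent.
X_q = rng.normal(size=(500, 4))          # probe samples (4 toy features)
y_q = target_predict(X_q)                # labels obtained from the black box
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_q @ w)))           # surrogate probabilities
    w -= 0.1 * X_q.T @ (p - y_q) / len(X_q)        # log-loss gradient step

# Step 2: FGSM-style perturbation computed on the *substitute*, constrained
# so only mutable features change (mask) and each change stays within eps.
def constrained_fgsm(x, y, eps=0.3, mask=np.array([1.0, 1.0, 0.0, 1.0])):
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    grad = (p - y) * w                   # d(log-loss)/dx for the surrogate
    return x + eps * np.sign(grad) * mask

x = X_q[y_q == 1][0]                     # a sample the target flags as malicious
x_adv = constrained_fgsm(x, 1.0)         # perturbed copy to replay at the target
```

The mask is what makes the example "constrained" in the sense the abstract emphasizes: features that an attacker cannot realistically alter in live traffic are held fixed, and the bounded step size keeps the perturbed flow plausible rather than an unconstrained outlier a classifier could trivially reject.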

Original language: English
Article number: 1500333
Journal: International Journal of Intelligent Systems
Volume: 2025
Issue number: 1
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • adversarial robustness
  • black-box adversarial attacks
  • constrained adversarial examples
  • data privacy
  • network intrusion detection system
