TY - JOUR
T1 - Generating adversarial examples via enhancing latent spatial features of benign traffic and preserving malicious functions
AU - Zhang, Rongqian
AU - Luo, Senlin
AU - Pan, Limin
AU - Hao, Jingwei
AU - Zhang, Ji
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2022/6/14
Y1 - 2022/6/14
N2 - Well-crafted adversarial examples can easily deceive neural network models into producing misclassified results, while also contributing to evaluating and improving the performance and robustness of the classification model. However, most adversarial example generation methods still have the following drawbacks: (1) noise is added directly to the original samples without regard for the distribution regularity of benign samples, so the generated adversarial examples differ significantly from benign samples in their latent spatial distribution, making it difficult for them to escape detection; (2) the discriminant features of the adversarial examples are modified directly, which changes their malicious patterns or renders their malicious functions unattainable. In this paper, a novel malicious traffic adversarial example generation method, NIDSFM, is proposed. NIDSFM reconstructs the feature space of the traffic samples and isolates the discriminant features so that the malicious functions of the generated adversarial examples are not disturbed. By exploiting the ability of the flow-based model to represent the latent space distribution, the distribution of adversarial examples is modeled around the benign samples and then fine-tuned with generative adversarial networks (GANs) using additional latent spatial noise, so that the distribution of the generated adversarial examples resembles that of benign samples. Extensive experiments were conducted on multiple datasets (NSL-KDD, UNSW-NB15, CIC-DDoS2019) and compared against various adversarial example generation methods. The experimental results show that the proposed method significantly reduces the detection rate of multiple NIDSs and is competitive in escaping NIDS detection.
KW - Adversarial attack
KW - Flow-based model
KW - Generate adversarial examples
KW - Generative adversarial networks
KW - Intrusion detection systems
UR - http://www.scopus.com/inward/record.url?scp=85122947559&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2021.12.015
DO - 10.1016/j.neucom.2021.12.015
M3 - Article
AN - SCOPUS:85122947559
SN - 0925-2312
VL - 490
SP - 413
EP - 430
JO - Neurocomputing
JF - Neurocomputing
ER -