TY - JOUR
T1 - A multitarget backdooring attack on deep neural networks with random location trigger
AU - Yu, Xiao
AU - Liu, Cong
AU - Zheng, Mingwen
AU - Wang, Yajie
AU - Liu, Xinrui
AU - Song, Shuxiao
AU - Ma, Yuexuan
AU - Zheng, Jun
N1 - Publisher Copyright:
© 2021 Wiley Periodicals LLC.
PY - 2022/3
Y1 - 2022/3
N2 - Machine learning has made tremendous progress and has been applied to various critical practical applications. However, recent studies have shown that machine learning models are vulnerable to malicious attacks, such as neural network backdoor triggering. A successful backdoor trigger can have serious consequences, such as allowing an attacker to bypass identity verification and enter a system directly. In previous work on image classification, one backdoor trigger always activates only one target label, and the position of the trigger is fixed, which limits the attack. In this paper, we propose a novel method in which a single trigger pattern corresponds to multiple target labels and the location of the trigger is unrestricted. In our method, the trigger guarantees that the malicious output falls within a set of targets chosen by the attacker, but the specific target depends on the original image onto which the trigger is pasted. Because of the diversity of the original images, it is difficult for the defender to predict which target an image carrying the trigger will be classified as. Moreover, the attacker can use a single trigger pattern to mount multitarget attacks at different locations, which provides greater flexibility. We also propose training a neural network as a detector that distinguishes backdoor images from clean images under multitarget backdooring attacks. Experimental results show that this detection method successfully detects backdoor images even when the trigger is placed at a random location, with a detection success rate as high as 86.02%.
AB - Machine learning has made tremendous progress and has been applied to various critical practical applications. However, recent studies have shown that machine learning models are vulnerable to malicious attacks, such as neural network backdoor triggering. A successful backdoor trigger can have serious consequences, such as allowing an attacker to bypass identity verification and enter a system directly. In previous work on image classification, one backdoor trigger always activates only one target label, and the position of the trigger is fixed, which limits the attack. In this paper, we propose a novel method in which a single trigger pattern corresponds to multiple target labels and the location of the trigger is unrestricted. In our method, the trigger guarantees that the malicious output falls within a set of targets chosen by the attacker, but the specific target depends on the original image onto which the trigger is pasted. Because of the diversity of the original images, it is difficult for the defender to predict which target an image carrying the trigger will be classified as. Moreover, the attacker can use a single trigger pattern to mount multitarget attacks at different locations, which provides greater flexibility. We also propose training a neural network as a detector that distinguishes backdoor images from clean images under multitarget backdooring attacks. Experimental results show that this detection method successfully detects backdoor images even when the trigger is placed at a random location, with a detection success rate as high as 86.02%.
UR - http://www.scopus.com/inward/record.url?scp=85122077333&partnerID=8YFLogxK
U2 - 10.1002/int.22785
DO - 10.1002/int.22785
M3 - Article
AN - SCOPUS:85122077333
SN - 0884-8173
VL - 37
SP - 2567
EP - 2583
JO - International Journal of Intelligent Systems
JF - International Journal of Intelligent Systems
IS - 3
ER -