TY - JOUR
T1 - Backdoor Attacks on Image Classification Models in Deep Neural Networks
AU - Zhang, Quanxin
AU - Ma, Wencong
AU - Wang, Yajie
AU - Zhang, Yaoyuan
AU - Shi, Zhiwei
AU - Li, Yuanzhang
N1 - Publisher Copyright:
© 2022 Chinese Institute of Electronics
PY - 2022/3
Y1 - 2022/3
N2 - Deep neural networks (DNNs) are widely applied in many fields and achieve state-of-the-art performance. However, a DNN's structure lacks transparency and interpretability for users. Attackers can exploit this property to embed trojan horses in the DNN structure, for example by inserting a backdoor so that the DNN learns both the normal main task and additional malicious tasks at the same time. Moreover, DNNs rely on data sets for training, so attackers can tamper with the training data to interfere with the training process, for example by attaching a trigger to input data. Because of these defects in DNN structure and data, backdoor attacks pose a serious threat to DNN security. A backdoored DNN performs well on benign inputs but outputs an attacker-specified label on inputs carrying the trigger. Backdoor attacks can be conducted at almost every stage of the machine learning pipeline. Although there is some research on backdoor attacks against image classification, a systematic review is still rare in this field. This paper is a comprehensive review of backdoor attacks. According to whether attackers have access to the training data, we divide backdoor attacks into two types: poisoning-based attacks and non-poisoning-based attacks. We go through the details of each work along a timeline, discussing its contributions and deficiencies. We propose a detailed mathematical backdoor model to summarize all kinds of backdoor attacks. Finally, we provide some insights into future studies.
AB - Deep neural networks (DNNs) are widely applied in many fields and achieve state-of-the-art performance. However, a DNN's structure lacks transparency and interpretability for users. Attackers can exploit this property to embed trojan horses in the DNN structure, for example by inserting a backdoor so that the DNN learns both the normal main task and additional malicious tasks at the same time. Moreover, DNNs rely on data sets for training, so attackers can tamper with the training data to interfere with the training process, for example by attaching a trigger to input data. Because of these defects in DNN structure and data, backdoor attacks pose a serious threat to DNN security. A backdoored DNN performs well on benign inputs but outputs an attacker-specified label on inputs carrying the trigger. Backdoor attacks can be conducted at almost every stage of the machine learning pipeline. Although there is some research on backdoor attacks against image classification, a systematic review is still rare in this field. This paper is a comprehensive review of backdoor attacks. According to whether attackers have access to the training data, we divide backdoor attacks into two types: poisoning-based attacks and non-poisoning-based attacks. We go through the details of each work along a timeline, discussing its contributions and deficiencies. We propose a detailed mathematical backdoor model to summarize all kinds of backdoor attacks. Finally, we provide some insights into future studies.
KW - Backdoor attack
KW - Non-poisoning-based attacks
KW - Poisoning-based attacks
KW - Review
KW - Security
UR - http://www.scopus.com/inward/record.url?scp=85126050609&partnerID=8YFLogxK
U2 - 10.1049/cje.2021.00.126
DO - 10.1049/cje.2021.00.126
M3 - Review article
AN - SCOPUS:85126050609
SN - 1022-4653
VL - 31
SP - 199
EP - 212
JO - Chinese Journal of Electronics
JF - Chinese Journal of Electronics
IS - 2
ER -