TY - JOUR
T1 - Towards cross-task universal perturbation against black-box object detectors in autonomous driving
AU - Zhang, Quanxin
AU - Zhao, Yuhang
AU - Wang, Yajie
AU - Baker, Thar
AU - Zhang, Jian
AU - Hu, Jingjing
N1 - Publisher Copyright:
© 2020
PY - 2020/10/24
Y1 - 2020/10/24
N2 - Deep neural networks are a central branch of artificial intelligence research and are well suited to many decision-making fields. Autonomous driving and unmanned vehicles often depend on deep neural networks for accurate and reliable detection, classification, and ranging of surrounding objects in real on-road environments, either locally or through swarm intelligence among distributed nodes over 5G channels. However, deep neural networks have been shown to be vulnerable to well-designed adversarial examples that are imperceptible to human eyes in computer vision tasks, and studying this vulnerability is valuable for enhancing the robustness of neural networks. Existing adversarial examples against object detection models are image-dependent; in this paper, we instead implement adversarial attacks against object detection models using universal perturbations. We exploit the cross-task, cross-model, and cross-dataset transferability of universal perturbations: we first train a universal perturbation generator and then add the universal perturbations to target images in two ways, resizing and pile-up, to overcome the problem that universal perturbations cannot be applied directly to attack object detection models. We then use the transferability of universal perturbations to attack black-box object detection models, which reduces the time cost of generating adversarial examples. A series of experiments on the PASCAL VOC and MS COCO datasets demonstrates the feasibility of cross-task attacks and proves the effectiveness of our attack on two representative object detectors: regression-based models such as YOLOv3 and proposal-based models such as Faster R-CNN.
AB - Deep neural networks are a central branch of artificial intelligence research and are well suited to many decision-making fields. Autonomous driving and unmanned vehicles often depend on deep neural networks for accurate and reliable detection, classification, and ranging of surrounding objects in real on-road environments, either locally or through swarm intelligence among distributed nodes over 5G channels. However, deep neural networks have been shown to be vulnerable to well-designed adversarial examples that are imperceptible to human eyes in computer vision tasks, and studying this vulnerability is valuable for enhancing the robustness of neural networks. Existing adversarial examples against object detection models are image-dependent; in this paper, we instead implement adversarial attacks against object detection models using universal perturbations. We exploit the cross-task, cross-model, and cross-dataset transferability of universal perturbations: we first train a universal perturbation generator and then add the universal perturbations to target images in two ways, resizing and pile-up, to overcome the problem that universal perturbations cannot be applied directly to attack object detection models. We then use the transferability of universal perturbations to attack black-box object detection models, which reduces the time cost of generating adversarial examples. A series of experiments on the PASCAL VOC and MS COCO datasets demonstrates the feasibility of cross-task attacks and proves the effectiveness of our attack on two representative object detectors: regression-based models such as YOLOv3 and proposal-based models such as Faster R-CNN.
KW - Adversarial example
KW - Object detection
KW - Universal perturbation
UR - http://www.scopus.com/inward/record.url?scp=85088224849&partnerID=8YFLogxK
U2 - 10.1016/j.comnet.2020.107388
DO - 10.1016/j.comnet.2020.107388
M3 - Article
AN - SCOPUS:85088224849
SN - 1389-1286
VL - 180
JO - Computer Networks
JF - Computer Networks
M1 - 107388
ER -