Towards a physical-world adversarial patch for blinding object detection models

Yajie Wang, Haoran Lv, Xiaohui Kuang, Gang Zhao, Yu-an Tan, Quanxin Zhang, Jingjing Hu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

48 Citations (Scopus)

Abstract

As one of the core components of computer vision, the object detection model plays a vital role in various security-sensitive systems. However, it has been shown that object detection models are vulnerable to adversarial attacks. In this paper, we propose a novel adversarial patch attack against object detection models. Our attack can make objects of a specific class invisible to object detection models. We design a detection score to measure the detection model's output and generate the adversarial patch by minimizing the detection score. We successfully suppress the model's inference and fool several state-of-the-art object detection models. We achieve a minimum recall of 11.02% and a maximum fooling rate of 81.00%, and demonstrate the high transferability of the adversarial patch across different architectures and datasets. Finally, we successfully fool a real-time object detection system in the physical world, demonstrating the feasibility of transferring the digital adversarial patch to the physical world. Our work illustrates the vulnerability of object detection models to adversarial patch attacks in both the digital and physical worlds.
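The abstract describes generating a patch by minimizing a detection score over the model's output. The sketch below illustrates that idea in generic PyTorch-style Python; the detector interface, score definition, patch placement, and hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: optimize an adversarial patch by minimizing a detection score.
# The detector's output format (B, N, C class scores per box), the fixed patch
# location, and all hyperparameters are hypothetical assumptions for illustration.
import torch


def apply_patch(images, patch, top, left):
    """Paste the patch onto a batch of images at a fixed location."""
    patched = images.clone()
    h, w = patch.shape[-2:]
    patched[:, :, top:top + h, left:left + w] = patch
    return patched


def detection_score(detector, images, target_class):
    """Aggregate confidence the detector assigns to the target class.
    Assumes the detector returns per-box class scores of shape (B, N, C)."""
    scores = detector(images)
    return scores[..., target_class].max(dim=1).values.mean()


def optimize_patch(detector, loader, target_class,
                   patch_size=64, steps=500, lr=0.03):
    """Gradient-descend on the patch pixels so the target class is suppressed."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for images, _ in loader:
            patched = apply_patch(images, patch.clamp(0, 1), top=20, left=20)
            loss = detection_score(detector, patched, target_class)
            opt.zero_grad()
            loss.backward()   # minimizing the score drives detections toward zero
            opt.step()
    return patch.detach().clamp(0, 1)
```

In this sketch, lowering the aggregated class confidence is what makes objects of the target class "invisible"; a physical-world version would additionally need transformations (scaling, rotation, lighting) applied to the patch during optimization, which are not shown here.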

Original language: English
Pages (from-to): 459-471
Number of pages: 13
Journal: Information Sciences
Volume: 556
DOI
Publication status: Published - May 2021
