Abstract
As one of the core components of computer vision, the object detection model plays a vital role in various security-sensitive systems. However, object detection models have been shown to be vulnerable to adversarial attacks. In this paper, we propose a novel adversarial patch attack against object detection models. Our attack can make objects of a specific class invisible to object detection models. We design a detection score to measure the detection model's output and generate the adversarial patch by minimizing this score. We successfully suppress the model's inference and fool several state-of-the-art object detection models, achieving a minimum recall of 11.02% and a maximum fooling rate of 81.00%, and we demonstrate the high transferability of the adversarial patch across different architectures and datasets. Finally, we successfully fool a real-time object detection system in the physical world, demonstrating the feasibility of transferring the digital adversarial patch to the physical world. Our work illustrates the vulnerability of object detection models to adversarial patch attacks in both the digital and physical worlds.
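The abstract describes generating the patch by minimizing a detection score over the detector's output. The paper does not give its implementation here, so the following is only a minimal sketch of that idea: `ToyDetector`, `apply_patch`, `detection_score`, and `TARGET_CLASS` are hypothetical placeholders standing in for a real detector (e.g., YOLO or Faster R-CNN), a patch-pasting routine, and the paper's actual score definition.

```python
import torch
import torch.nn.functional as F

# Placeholder "detector": a tiny CNN emitting per-location class logits.
# A real attack would use an actual detection model; this stand-in only
# keeps the sketch self-contained and runnable.
class ToyDetector(torch.nn.Module):
    def __init__(self, num_classes=80):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, 32, 3, stride=8, padding=1)
        self.head = torch.nn.Conv2d(32, num_classes, 1)

    def forward(self, x):                            # x: (B, 3, H, W)
        return self.head(F.relu(self.backbone(x)))   # (B, C, H/8, W/8) logits


def apply_patch(images, patch, top=20, left=20):
    """Paste the learnable patch onto every image at a fixed location."""
    patched = images.clone()
    ph, pw = patch.shape[-2:]
    patched[:, :, top:top + ph, left:left + pw] = torch.clamp(patch, 0, 1)
    return patched


def detection_score(logits, target_class):
    """Assumed form of a detection score: the maximum confidence the
    detector assigns to the target class anywhere in the image."""
    probs = torch.sigmoid(logits[:, target_class])     # (B, H', W')
    return probs.flatten(1).max(dim=1).values.mean()   # scalar


detector = ToyDetector().eval()
for p in detector.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 60, 60, requires_grad=True)   # learnable adversarial patch
optimizer = torch.optim.Adam([patch], lr=0.01)
images = torch.rand(4, 3, 416, 416)                  # stand-in image batch
TARGET_CLASS = 0                                     # class to make "invisible"

for step in range(200):
    optimizer.zero_grad()
    score = detection_score(detector(apply_patch(images, patch)), TARGET_CLASS)
    score.backward()           # minimize the detection score w.r.t. the patch
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)     # keep the patch a valid image
```

The key design point the sketch illustrates is that only the patch pixels are optimized while the detector's weights stay frozen, so the same loop can in principle be pointed at different detector architectures.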
Original language | English
---|---
Pages (from-to) | 459-471
Number of pages | 13
Journal | Information Sciences
Volume | 556
DOI |
Publication status | Published - May 2021