Abstract
Object detection models are a core component of many IoT devices. Prior experiments have shown that object detection models are vulnerable to adversarial examples. Several attack methods against object detection models have been proposed, but existing methods can only attack white-box models or a specific type of black-box model. In this paper, we propose a novel black-box attack method called Evaporate Attack, which can successfully attack both regression-based and region-based detection models. To attack different types of object detection models effectively, we design an optimization algorithm that generates adversarial examples using only the positions and labels of the model's predictions. Evaporate Attack can hide objects from detection models without any internal information about the model, a scenario that is much more practical for real-world attackers. Our approach achieves an 84% fooling rate on the regression-based YOLOv3 and a 48% fooling rate on the region-based Faster R-CNN, where an attack counts as successful only if all objects in the image are hidden.
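The abstract does not detail the optimization algorithm, but the setting it describes is a decision-based attack: the attacker may query the detector and observe only predicted boxes and labels. The sketch below illustrates one way such a loop could look, using a random-search perturbation within an L-infinity budget; the `detect` stub and the acceptance rule are illustrative assumptions, not the paper's actual method.

```python
import numpy as np


def detect(image):
    """Stand-in for the black-box detector (hypothetical).

    In practice this would query a model such as YOLOv3 or Faster R-CNN and
    return only the predicted boxes and class labels -- no scores, no gradients.
    """
    # Dummy behaviour so the sketch runs end to end; replace with real queries.
    return [("person", (10, 10, 50, 50))] if image.mean() > 0.4 else []


def hide_objects_sketch(image, eps=8 / 255, steps=500, seed=0):
    """Random-search, decision-based attack sketch: keep a candidate
    perturbation only if it leaves no more detected objects than before,
    relying solely on box/label outputs from the black-box model."""
    rng = np.random.default_rng(seed)
    best_delta = np.zeros_like(image)
    best_count = len(detect(np.clip(image + best_delta, 0.0, 1.0)))

    for _ in range(steps):
        if best_count == 0:  # all objects hidden -> attack succeeds
            break
        # Propose a small random change, then project back into the L-inf ball.
        candidate = best_delta + rng.normal(0.0, eps / 4, size=image.shape)
        candidate = np.clip(candidate, -eps, eps)
        count = len(detect(np.clip(image + candidate, 0.0, 1.0)))
        if count <= best_count:  # accept if it hides at least as many objects
            best_delta, best_count = candidate, count

    return np.clip(image + best_delta, 0.0, 1.0), best_count


if __name__ == "__main__":
    clean = np.full((32, 32, 3), 0.5, dtype=np.float32)
    adv, remaining = hide_objects_sketch(clean)
    print("objects still detected:", remaining)
```

Under the "all objects hidden" success criterion quoted in the abstract, an image counts toward the fooling rate only when the final detection count reaches zero.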
| Original language | English |
| --- | --- |
| Article number | 102634 |
| Journal | Journal of Network and Computer Applications |
| Volume | 161 |
| DOIs | |
| Publication status | Published - 1 Jul 2020 |
Keywords
- Adversarial example
- Black-box attack
- Deep neural network
- Object detector