Abstract
Object detection models play an essential role in various IoT devices as one of their core components. Experiments have shown that object detection models are vulnerable to adversarial examples. To date, several attack methods against object detection models have been proposed, but the existing methods can only attack white-box models or a specific type of black-box model. In this paper, we propose a novel black-box attack method called Evaporate Attack, which can successfully attack both regression-based and region-based detection models. To perform an effective attack on different types of object detection models, we design an optimization algorithm that generates adversarial examples utilizing only the position and label information of the model's predictions. Evaporate Attack can hide objects from detection models without any internal information about the model, a scenario that is much more practical for real-world attackers. Our approach achieves an 84% fooling rate on regression-based YOLOv3 and a 48% fooling rate on region-based Faster R-CNN, under the premise that all objects are hidden.
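As a rough illustration of the threat model described in the abstract, the sketch below shows a generic query loop that receives only predicted boxes and labels as feedback. The `detect` wrapper is hypothetical, and the greedy random-search update is a stand-in for illustration only; it is not the paper's actual Evaporate Attack optimization algorithm.

```python
# Minimal, hypothetical sketch of a position/label-only black-box loop.
# `detect(image)` is an assumed wrapper around any detector (e.g. YOLOv3 or
# Faster R-CNN) returning only predicted boxes and class labels -- the sole
# feedback this threat model allows. The random-search update here is NOT the
# paper's algorithm; it merely illustrates the objective of driving the
# number of detected objects to zero using query feedback alone.
import numpy as np

def detect(image):
    """Placeholder black-box detector: returns a list of (box, label) pairs."""
    raise NotImplementedError("wrap your detector's prediction API here")

def hide_objects(image, queries=1000, step=8.0, seed=0):
    rng = np.random.default_rng(seed)
    best = image.astype(np.float64)
    best_count = len(detect(np.clip(best, 0, 255).astype(np.uint8)))
    for _ in range(queries):
        noise = rng.normal(0.0, step, size=image.shape)
        candidate = np.clip(best + noise, 0, 255)
        count = len(detect(candidate.astype(np.uint8)))
        if count < best_count:   # fewer objects detected -> keep this candidate
            best, best_count = candidate, count
        if best_count == 0:      # all objects hidden from the detector
            break
    return best.astype(np.uint8), best_count
```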
Original language | English
---|---
Article number | 102634
Journal | Journal of Network and Computer Applications
Volume | 161
DOI |
Publication status | Published - 1 Jul 2020