Universal Physical Adversarial Attack via Background Image

Yidan Xu, Juan Wang, Yuanzhang Li, Yajie Wang, Zixuan Xu, Dianxin Wang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

Recently, adversarial attacks against object detectors have become a research hotspot in academia. However, digital adversarial attacks generate adversarial perturbations on digital images in a pixel-wise manner, which is difficult to deploy accurately in the real world. Physical adversarial attacks usually require pasting adversarial patches on the surface of each target object individually, which is unsuitable for objects with complex shapes and hard to deploy in practice. In this paper, we propose a universal background adversarial attack method against deep-learning object detection, which places the target objects on a universal background image and changes the local pixel information around the target objects so that object detectors cannot recognize them. This method takes the form of a universal background image for the physical adversarial attack and is easy to deploy in the real world. A single universal background image can attack different classes of target objects simultaneously and remains robust under varying angles and distances. Extensive experiments show that the universal background attack successfully attacks two object detection models, YOLO v3 and Faster R-CNN, with average success rates of 74.9% and 67.8% respectively, at distances from 15 cm to 60 cm and angles from -90° to 90° in the physical world.
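The abstract describes optimizing a single background image so that detectors miss objects placed anywhere on it. The paper's actual training pipeline is not available here; the toy example below is only a hypothetical sketch of the underlying idea — optimize background pixels, under random object placements, to suppress a detection score. A simple linear scorer stands in for the real detectors (YOLO v3 / Faster R-CNN), and all names, sizes, and the scorer itself are illustrative assumptions, not the authors' method:

```python
import numpy as np

# Toy sketch (NOT the paper's implementation): learn a "universal
# background" that suppresses a stand-in detection score for an object
# composited at random positions on it.

rng = np.random.default_rng(0)
H, W = 32, 32                       # background size (toy scale)
h, w = 8, 8                         # object patch size

weights = rng.normal(size=(H, W))   # toy "detector": score = <weights, image>

def composite(bg, obj, top, left):
    """Paste the object patch onto a copy of the background."""
    img = bg.copy()
    img[top:top + h, left:left + w] = obj
    return img

def score(img):
    """Stand-in for a detector's objectness score."""
    return float((weights * img).sum())

obj = rng.uniform(0.2, 0.8, size=(h, w))   # fixed target object
bg = rng.uniform(0.0, 1.0, size=(H, W))    # background to be optimized
bg0 = bg.copy()                            # keep the initial background

lr = 0.05
for step in range(200):
    # Random placement each step, loosely mimicking robustness to
    # varying distance/angle in the physical experiments.
    top = int(rng.integers(0, H - h))
    left = int(rng.integers(0, W - w))
    # Gradient of the linear score w.r.t. background pixels is `weights`,
    # except where the object covers the background.
    grad = weights.copy()
    grad[top:top + h, left:left + w] = 0.0
    # Gradient descent on the score, keeping pixels in a printable range.
    bg = np.clip(bg - lr * grad, 0.0, 1.0)

placements = [(0, 0), (10, 5), (20, 20)]
before = np.mean([score(composite(bg0, obj, t, l)) for t, l in placements])
after = np.mean([score(composite(bg, obj, t, l)) for t, l in placements])
```

A real attack would replace the linear scorer with the detector's confidence for the target object and backpropagate through the compositing step; the structure of the loop (composite, score, descend) stays the same.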

Original language: English
Title of host publication: Applied Cryptography and Network Security Workshops - ACNS 2022 Satellite Workshops, AIBlock, AIHWS, AIoTS, CIMSS, Cloud S and P, SCI, SecMT, SiMLA, Proceedings
Editors: Jianying Zhou, Sudipta Chattopadhyay, Sridhar Adepu, Cristina Alcaraz, Lejla Batina, Emiliano Casalicchio, Chenglu Jin, Jingqiang Lin, Eleonora Losiouk, Suryadipta Majumdar, Weizhi Meng, Stjepan Picek, Yury Zhauniarovich, Jun Shao, Chunhua Su, Cong Wang, Saman Zonouz
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 3-14
Number of pages: 12
ISBN (Print): 9783031168147
DOIs
Publication status: Published - 2022
Event: Satellite Workshops on AIBlock, AIHWS, AIoTS, CIMSS, Cloud S and P, SCI, SecMT, SiMLA 2022, held in conjunction with the 20th International Conference on Applied Cryptography and Network Security, ACNS 2022 - Virtual, Online
Duration: 20 Jun 2022 - 23 Jun 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13285 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Satellite Workshops on AIBlock, AIHWS, AIoTS, CIMSS, Cloud S and P, SCI, SecMT, SiMLA 2022, held in conjunction with the 20th International Conference on Applied Cryptography and Network Security, ACNS 2022
City: Virtual, Online
Period: 20/06/22 - 23/06/22

Keywords

  • Adversarial examples
  • Object detection
  • Physical adversarial attack
