Improving the invisibility of adversarial examples with perceptually adaptive perturbation

Yaoyuan Zhang, Yu-an Tan, Haipeng Sun, Yuhang Zhao, Quanxing Zhang, Yuanzhang Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples generated by adding subtle perturbations to benign inputs. Although these perturbations are kept small by an Lp-norm constraint, they are still easily spotted by the human eye. This paper proposes the Perceptual Sensitive Attack (PS Attack), a perceptually adaptive scheme that addresses this flaw. We incorporate the Just Noticeable Difference (JND) as prior information in adversarial attacks, concentrating image changes in areas to which the human eye is insensitive. By integrating the JND matrix into the Lp norm, PS Attack projects perturbations onto the JND space around the clean data, yielding less perceptible adversarial perturbations. PS Attack also mitigates the trade-off between the imperceptibility and transferability of adversarial images through an adjustable visual coefficient. Extensive experiments show that combining PS Attack with state-of-the-art black-box approaches significantly improves the naturalness of adversarial examples while maintaining their attack ability. Compared to state-of-the-art transferable attacks, our attacks reduce LPIPS by 8% on average when attacking both normally trained and defense models.
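To make the projection idea concrete, below is a minimal sketch of one iteration of a JND-constrained, PGD-style attack step. It is not the authors' implementation: the function name `ps_attack_step`, the step size `alpha`, and the default visual coefficient `c` are illustrative assumptions, and how the JND matrix itself is computed is not shown here. The sketch only illustrates the core mechanism described in the abstract, namely clipping each pixel's perturbation to a budget scaled by its JND value instead of a uniform Lp bound.

```python
import torch

def ps_attack_step(model, loss_fn, x_clean, y, delta, jnd, alpha=2 / 255, c=1.0):
    """One hypothetical iteration of a JND-constrained attack (illustrative sketch).

    x_clean : clean input batch, shape (N, C, H, W), pixel values in [0, 1]
    jnd     : per-pixel Just Noticeable Difference matrix, same shape as x_clean
    c       : visual coefficient scaling the per-pixel perturbation budget
    """
    delta = delta.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_clean + delta), y)
    loss.backward()

    with torch.no_grad():
        # Gradient-sign ascent step, as in I-FGSM / PGD
        delta = delta + alpha * delta.grad.sign()
        # Project onto the JND-scaled box: each pixel may move at most c * JND
        delta = torch.maximum(torch.minimum(delta, c * jnd), -c * jnd)
        # Keep the adversarial image inside the valid pixel range
        delta = torch.clamp(x_clean + delta, 0.0, 1.0) - x_clean
    return delta
```

Under this sketch, pixels in visually busy regions (large JND) receive a larger perturbation budget, while pixels in smooth regions (small JND) are perturbed very little, which is the intuition behind the improved imperceptibility; lowering `c` trades attack transferability for invisibility.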

Original language: English
Pages (from-to): 126-137
Number of pages: 12
Journal: Information Sciences
Volume: 635
DOIs
Publication status: Published - Jul 2023

Keywords

  • Adversarial examples
  • Deep neural networks
  • Image classification
  • Just noticeable difference
  • Perceptually adaptive

