Improving the invisibility of adversarial examples with perceptually adaptive perturbation

Yaoyuan Zhang, Yu-an Tan, Haipeng Sun, Yuhang Zhao, Quanxing Zhang, Yuanzhang Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples generated by adding subtle perturbations to benign inputs. While these perturbations are small under the Lp norm constraint, they are still easily spotted by human eyes. This paper proposes the Perceptual Sensitive Attack (PS Attack) to address this flaw with a perceptually adaptive scheme. We incorporate the Just Noticeable Difference (JND) as prior information into adversarial attacks, concentrating image changes in areas to which the human eye is insensitive. By integrating the JND matrix into the Lp norm, PS Attack projects perturbations onto the JND space around clean data, yielding more imperceptible adversarial perturbations. PS Attack also mitigates the trade-off between the imperceptibility and transferability of adversarial images through an adjustable visual coefficient. Extensive experiments demonstrate that combining PS Attack with state-of-the-art black-box approaches significantly improves the naturalness of adversarial examples while maintaining their attack ability. Compared to state-of-the-art transferable attacks, our attacks reduce LPIPS by 8% on average when attacking both typically-trained and defense models.
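The projection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `ps_project`, the visual coefficient `alpha`, and the exact way the JND matrix scales the per-pixel budget are assumptions made for the example, which uses an L-infinity-style elementwise clip.

```python
import numpy as np

def ps_project(x_adv, x_clean, jnd, eps, alpha=1.0):
    """Project an adversarial image onto a JND-modulated ball around x_clean.

    Hypothetical sketch: the global bound `eps` is scaled per pixel by the
    JND matrix and a visual coefficient `alpha`, so larger perturbations
    are allowed in regions the human eye is less sensitive to.
    """
    budget = eps * (alpha * jnd)                 # per-pixel perturbation budget
    delta = np.clip(x_adv - x_clean, -budget, budget)
    return np.clip(x_clean + delta, 0.0, 1.0)    # keep a valid pixel range
```

Raising `alpha` relaxes the perceptual budget (stronger but more visible perturbations), while lowering it tightens imperceptibility, which is how the trade-off mentioned in the abstract could be tuned.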

Original language: English
Pages (from-to): 126-137
Number of pages: 12
Journal: Information Sciences
Volume: 635
DOI
Publication status: Published - Jul. 2023
