ATTACK-COSM: attacking the camouflaged object segmentation model through digital world adversarial examples

Qiaoyi Li, Zhengjie Wang*, Xiaoning Zhang, Yang Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The camouflaged object segmentation model (COSM) has recently attracted substantial attention for its remarkable ability to detect camouflaged objects. Nevertheless, deep vision models are widely acknowledged to be susceptible to adversarial examples: imperceptible perturbations that mislead a model into making incorrect predictions. This vulnerability raises significant concerns when COSM is deployed in security-sensitive applications, so it is crucial to determine whether COSM, as a foundational vision model, is also susceptible to such attacks. To our knowledge, this work is the first to explore strategies for attacking COSM with adversarial examples in the digital world. With the primary objective of reversing the predictions for both masked objects and backgrounds, we examine the adversarial robustness of COSM in both white-box and black-box settings. Beyond this primary objective, our investigation shows that adversarial attacks can drive the model to produce any desired mask. The experimental results indicate that COSM exhibits weak robustness and is therefore vulnerable to adversarial example attacks. For camouflaged object segmentation (COS), the projected gradient descent (PGD) attack exhibits stronger attack capability than the fast gradient sign method (FGSM) in both white-box and black-box settings. These findings reveal security risks in the application of COSM and pave the way for broader applications of COSM.
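As a rough illustration of the attack setting described above, the sketch below shows a generic targeted PGD attack that pushes a segmentation model toward the inverted mask (object and background swapped). The model interface, loss, and hyperparameters are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_invert_mask(model, image, epsilon=8 / 255, alpha=2 / 255, steps=40):
    """Minimal PGD sketch: perturb `image` so a COS model predicts the inverted mask.

    Assumption: `model` maps an image tensor (N, 3, H, W) in [0, 1] to per-pixel
    mask logits (N, 1, H, W). This interface is hypothetical, not the paper's API.
    """
    model.eval()
    with torch.no_grad():
        # Target mask = inversion of the clean prediction (object <-> background).
        clean_mask = (torch.sigmoid(model(image)) > 0.5).float()
        target = 1.0 - clean_mask

    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)
        # Targeted attack: minimize the loss toward the inverted mask,
        # so we take a step against the gradient.
        loss = F.binary_cross_entropy_with_logits(logits, target)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() - alpha * grad.sign()
        # Project back into the L-infinity ball around the clean image
        # and into the valid pixel range.
        adv = image + torch.clamp(adv - image, -epsilon, epsilon)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()
```

A single-step FGSM variant corresponds to `steps=1` with `alpha=epsilon`; the abstract's comparison of PGD and FGSM differs only in this iteration scheme, not in the objective.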

Original language: English
Journal: Complex and Intelligent Systems
DOI
Publication status: Accepted/In press - 2024
