Modifying the one-hot encoding technique can enhance the adversarial robustness of the visual model for symbol recognition

Yi Sun, Jun Zheng, Hanyu Zhao, Huipeng Zhou, Jiaxing Li, Fan Li, Zehui Xiong, Jun Liu, Yuanzhang Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Deep learning systems, particularly those used in image classification, are threatened by adversarial examples. In contrast, the mammalian visual system is unaffected by them. We undertake a comparative analysis of traditional image multi-classification models and human cognitive frameworks, namely ACT-R and QN-MHP, and find that the one-hot encoded output structure lacks anatomical support. Furthermore, the CLIP model, which uses natural language supervision, closely resembles the human visual cognition process. Category labels carry structural information, and different label errors are therefore not equivalent in most cases; one-hot encoding disregards these distinctions, exacerbating the detrimental impact of adversarial examples. We introduce a new direction for adversarial defense that replaces one-hot encoding with natural language encoding, or with other encodings that preserve the structural information between labels. Experiments and a Lipschitz continuity analysis show that this approach can enhance model robustness against adversarial examples, especially in scenarios such as visual symbol recognition.
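To make the proposed direction concrete, the following is a minimal sketch of training an image model against fixed label embeddings instead of one-hot targets, assuming a PyTorch setup; the encoder architecture, class count, and the source of the label embeddings (e.g., a CLIP text encoder applied to the class names) are illustrative assumptions, not the authors' released implementation.

# A minimal sketch (not the paper's code) of the core idea: replace one-hot
# targets with fixed label embeddings that preserve inter-label structure,
# and train by matching image features to those embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingTargetClassifier(nn.Module):
    """Image encoder whose output is scored against per-class label
    embeddings (e.g., text embeddings of class names) instead of logits."""
    def __init__(self, image_encoder: nn.Module, label_embeddings: torch.Tensor):
        super().__init__()
        self.encoder = image_encoder
        # Fixed, L2-normalized embedding per class; the rows encode the
        # structural relationships between labels that one-hot discards.
        self.register_buffer("labels", F.normalize(label_embeddings, dim=-1))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = F.normalize(self.encoder(images), dim=-1)
        # Cosine similarity to every class embedding plays the role of logits.
        return feats @ self.labels.t()

# Toy usage with random data; in practice label_embeddings would come from
# a text encoder applied to the class names (hypothetical setup).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
label_embeddings = torch.randn(10, 64)   # 10 classes, 64-dim embeddings
model = EmbeddingTargetClassifier(encoder, label_embeddings)

images = torch.randn(8, 3, 32, 32)
targets = torch.randint(0, 10, (8,))
logits = model(images) / 0.07            # temperature-scaled similarities
loss = F.cross_entropy(logits, targets)  # contrastive-style objective
loss.backward()

At inference time the predicted class is the embedding with the highest cosine similarity; because semantically related labels can be given nearby embeddings, a perturbation that flips the prediction tends to land on a close class rather than an arbitrary one, which is the intuition behind the robustness claim.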

Original language: English
Article number: 123751
Journal: Expert Systems with Applications
Volume: 250
DOI
Publication status: Published - 15 Sep 2024
