
Adversarial Examples for Preventing Diffusion Models from Malicious Image Edition

  • Mengjie Guo
  • Keke Gai*
  • Jing Yu*
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In recent years, with the advancement of artificial intelligence technology, Diffusion Models have become a prominent research direction, exhibiting remarkable proficiency in image generation tasks. However, the unrestricted use of Diffusion Models by infringers to illicitly edit unauthorized images has given rise to new copyright challenges and privacy concerns. To address these issues, this paper introduces an adversarial example-based approach that can significantly mitigate malicious image modification by Diffusion Models. The key idea is to add imperceptible adversarial perturbations to the image so that the representation of the perturbed image in the latent space is far from that of the original image, thereby disrupting the editing operations of Diffusion Models and causing them to generate unrealistic pictures. Extensive experimental results demonstrate the efficacy and robustness of this method.
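The latent-space idea described in the abstract (perturb the image so its encoded representation moves far from the original's) can be illustrated with a minimal PGD-style sketch. This is a hypothetical illustration under stated assumptions, not the authors' implementation: the `encoder` callable (e.g., a latent diffusion VAE encoder), the function name `latent_escape_attack`, and all hyperparameters are placeholders chosen for clarity.

```python
# Minimal sketch (assumptions: `encoder` is a differentiable image-to-latent
# mapping such as a latent diffusion VAE encoder; x is a [0,1] image tensor).
import torch
import torch.nn.functional as F


def latent_escape_attack(x, encoder, eps=8 / 255, alpha=2 / 255, steps=40):
    """Return a perturbed image whose latent is pushed away from the original's,
    with the perturbation bounded by an L-infinity ball of radius eps."""
    with torch.no_grad():
        z_orig = encoder(x)  # latent of the clean image

    # Random start inside the eps-ball so the gradient is non-zero at step 1.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        z_adv = encoder(x_adv)
        # Objective: maximize the latent-space distance to the original image.
        loss = F.mse_loss(z_adv, z_orig)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                      # keep a valid image
        x_adv = x_adv.detach()
    return x_adv
```

In this sketch the perturbation budget `eps` keeps the change imperceptible, while the ascent on the latent distance is what disrupts downstream diffusion-based editing; the paper's actual objective and optimization details may differ.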

Original language: English
Title of host publication: Knowledge Science, Engineering and Management - 17th International Conference, KSEM 2024, Proceedings
Editors: Cungeng Cao, Huajun Chen, Liang Zhao, Junaid Arshad, Yonghao Wang, Taufiq Asyhari
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 373-385
Number of pages: 13
ISBN (Print): 9789819754977
DOI
Publication status: Published - 2024
Event: 17th International Conference on Knowledge Science, Engineering and Management, KSEM 2024 - Birmingham, United Kingdom
Duration: 16 Aug 2024 - 18 Aug 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14886 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th International Conference on Knowledge Science, Engineering and Management, KSEM 2024
Country/Territory: United Kingdom
City: Birmingham
Period: 16/08/24 - 18/08/24

