Adversarial Examples for Preventing Diffusion Models from Malicious Image Edition

  • Mengjie Guo
  • Keke Gai*
  • Jing Yu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In recent years, with the advancement of artificial intelligence technology, Diffusion Models have become a prominent research direction, exhibiting remarkable proficiency in image generation tasks. However, the unrestricted use of Diffusion Models by infringers to illicitly edit unauthorized images has given rise to novel copyright challenges and privacy concerns. To address these issues, this paper introduces an adversarial example-based approach that can significantly mitigate malicious image modifications by Diffusion Models. The key idea is to add imperceptible adversarial perturbations to the image so that the latent-space representation of the perturbed image is far from that of the original, thereby disrupting the editing operations of Diffusion Models and causing them to produce unrealistic images. Extensive experimental results demonstrate the efficacy and robustness of this method.
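The core mechanism described in the abstract, a bounded perturbation optimized to push the image's latent representation away from the original's, can be sketched as a PGD-style ascent loop. The snippet below is a hedged illustration under stated assumptions, not the authors' implementation: a fixed random linear map stands in for a diffusion model's image encoder so the example stays self-contained, and the names `encode` and `latent_attack` are hypothetical.

```python
import numpy as np

# Toy stand-in for a diffusion model's image encoder (assumption: in the real
# method this would be the model's latent encoder, e.g. a VAE).
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))  # maps a 64-dim "image" to a 16-dim latent

def encode(x):
    return W @ x

def latent_attack(x, eps=0.05, alpha=0.01, steps=40):
    """Maximize ||encode(x + delta) - encode(x)||^2 s.t. ||delta||_inf <= eps."""
    z0 = encode(x)
    delta = rng.uniform(-eps, eps, x.shape)     # random start inside the eps-ball
    for _ in range(steps):
        diff = encode(x + delta) - z0
        grad = 2.0 * W.T @ diff                 # analytic gradient (encoder is linear here)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)  # ascent step + projection
    return np.clip(x + delta, 0.0, 1.0)         # keep pixel values in a valid range

x = rng.uniform(0.0, 1.0, 64)
x_adv = latent_attack(x)
# x_adv differs from x by at most eps per pixel, yet its latent representation
# is driven far from encode(x), which is the disruption the paper relies on.
```

For a real diffusion model the analytic gradient would be replaced by automatic differentiation through the encoder; the structure of the loop (gradient ascent on the latent distance, followed by projection onto the L-infinity ball) is the same.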

Original language: English
Title of host publication: Knowledge Science, Engineering and Management - 17th International Conference, KSEM 2024, Proceedings
Editors: Cungeng Cao, Huajun Chen, Liang Zhao, Junaid Arshad, Yonghao Wang, Taufiq Asyhari
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 373-385
Number of pages: 13
ISBN (Print): 9789819754977
DOIs
Publication status: Published - 2024
Event: 17th International Conference on Knowledge Science, Engineering and Management, KSEM 2024 - Birmingham, United Kingdom
Duration: 16 Aug 2024 - 18 Aug 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14886 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th International Conference on Knowledge Science, Engineering and Management, KSEM 2024
Country/Territory: United Kingdom
City: Birmingham
Period: 16/08/24 - 18/08/24

Keywords

  • Adversarial Examples
  • Adversarial Perturbations
  • Diffusion Models
  • Latent Distribution
  • Latent Space
