TY - GEN
T1 - Adversarial Examples for Preventing Diffusion Models from Malicious Image Edition
AU - Guo, Mengjie
AU - Gai, Keke
AU - Yu, Jing
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - In recent years, with the advancement of artificial intelligence technology, Diffusion Models have become a prominent research direction, exhibiting remarkable proficiency in image generation tasks. However, the unrestricted use of Diffusion Models by infringers to illicitly edit unauthorized images has given rise to novel copyright challenges and privacy concerns. To address these issues, this paper introduces an adversarial example-based approach that can significantly mitigate malicious image modifications by Diffusion Models. The key idea is to add imperceptible adversarial perturbations to the image so that the representation of the perturbed image in the latent space is far from that of the original image, thereby effectively disrupting the editing operations of Diffusion Models and causing them to generate unrealistic pictures. Extensive experimental results demonstrate the efficacy and robustness of this method.
AB - In recent years, with the advancement of artificial intelligence technology, Diffusion Models have become a prominent research direction, exhibiting remarkable proficiency in image generation tasks. However, the unrestricted use of Diffusion Models by infringers to illicitly edit unauthorized images has given rise to novel copyright challenges and privacy concerns. To address these issues, this paper introduces an adversarial example-based approach that can significantly mitigate malicious image modifications by Diffusion Models. The key idea is to add imperceptible adversarial perturbations to the image so that the representation of the perturbed image in the latent space is far from that of the original image, thereby effectively disrupting the editing operations of Diffusion Models and causing them to generate unrealistic pictures. Extensive experimental results demonstrate the efficacy and robustness of this method.
KW - Adversarial Examples
KW - Adversarial Perturbations
KW - Diffusion Models
KW - Latent Distribution
KW - Latent Space
UR - https://www.scopus.com/pages/publications/85200741191
U2 - 10.1007/978-981-97-5498-4_29
DO - 10.1007/978-981-97-5498-4_29
M3 - Conference contribution
AN - SCOPUS:85200741191
SN - 9789819754977
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 373
EP - 385
BT - Knowledge Science, Engineering and Management - 17th International Conference, KSEM 2024, Proceedings
A2 - Cao, Cungeng
A2 - Chen, Huajun
A2 - Zhao, Liang
A2 - Arshad, Junaid
A2 - Wang, Yonghao
A2 - Asyhari, Taufiq
PB - Springer Science and Business Media Deutschland GmbH
T2 - 17th International Conference on Knowledge Science, Engineering and Management, KSEM 2024
Y2 - 16 August 2024 through 18 August 2024
ER -