Self-Supervised Learning with Consistency Loss for Improving GANs

Jie Gao, Dandan Song*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Despite extensive research and many advances, GANs have achieved great success but still face significant challenges. In this paper, we adopt self-supervised learning based on rotation-angle prediction to overcome catastrophic forgetting in the discriminator. Self-supervision encourages the discriminator to learn meaningful feature representations that are not forgotten during training. Meanwhile, we adopt consistent adversarial training to alleviate mode collapse in the generator. The consistency constraint encourages the discriminator to explore more features, which gives the generator more room for improvement. This deep generative model improves unsupervised image generation by simultaneously alleviating two critical issues in GANs. Experimental results demonstrate that our model achieves competitive scores.
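The two auxiliary objectives described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the helper names (`rotate_batch`, `rotation_ss_loss`, `consistency_loss`) and the exact loss forms (4-way cross-entropy for rotation prediction, a squared-difference penalty between discriminator scores on an image and its augmented copy) are assumptions based on how such losses are commonly defined.

```python
import numpy as np

def rotate_batch(images):
    """Build the four rotated copies (0, 90, 180, 270 degrees) of each
    NHWC image batch plus the rotation labels (hypothetical helper)."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(np.rot90(images, k=k, axes=(1, 2)))
        labels.append(np.full(len(images), k))
    return np.concatenate(rotated), np.concatenate(labels)

def rotation_ss_loss(logits, labels):
    """Cross-entropy over a 4-way rotation-prediction head: the
    discriminator must recognize how each image was rotated."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def consistency_loss(d_scores, d_scores_augmented):
    """Penalize the discriminator for scoring an image and its augmented
    copy differently (squared difference, averaged over the batch)."""
    return np.mean((d_scores - d_scores_augmented) ** 2)
```

In a full training loop, the rotation loss would be added to the discriminator (and optionally generator) objective with a weighting coefficient, and the consistency term would regularize the discriminator across augmentations; both weights are hyperparameters not specified here.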

Original language: English
Title of host publication: International Conference on Mechanisms and Robotics, ICMAR 2022
Editors: Zeguang Pei
Publisher: SPIE
ISBN (Electronic): 9781510657328
DOIs
Publication status: Published - 2022
Event: 2022 International Conference on Mechanisms and Robotics, ICMAR 2022 - Zhuhai, China
Duration: 25 Feb 2022 - 27 Feb 2022

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 12331
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 2022 International Conference on Mechanisms and Robotics, ICMAR 2022
Country/Territory: China
City: Zhuhai
Period: 25/02/22 - 27/02/22

Keywords

  • Consistent adversarial
  • Deep Learning
  • Generative Adversarial Networks
  • Self-supervision
