ACGAN: Attribute controllable person image synthesis GAN for pose transfer

Shao Yue Lin, Yan Jun Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Pose transfer and attribute control remain challenging tasks for person image synthesis networks. Moreover, the images generated for these two tasks often contain artifacts, which either lose image details or introduce erroneous image information and thus degrade the overall performance of existing work. In this paper, a generative adversarial network (GAN) named ACGAN is proposed to accomplish both tasks and effectively eliminate artifacts in the generated images. The proposed network is compared quantitatively and qualitatively with previous works on the DeepFashion dataset and obtains better results. Moreover, the overall network has advantages over previous works in speed and number of parameters.
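To make the pose-transfer setting concrete, the sketch below shows a minimal conditional generator that synthesizes a person image from a source appearance image and a target pose representation (e.g. keypoint heatmaps). This is only an illustrative assumption of how such a network can be conditioned; the layer sizes, the 18-channel pose encoding, and the class name are hypothetical and do not reflect the actual ACGAN architecture described in the paper.

```python
# Hypothetical pose-transfer generator sketch (not the paper's ACGAN).
import torch
import torch.nn as nn


class PoseTransferGenerator(nn.Module):
    def __init__(self, img_channels=3, pose_channels=18, base=64):
        super().__init__()
        # Condition on appearance + target pose via channel-wise concatenation.
        in_ch = img_channels + pose_channels
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # output image in [-1, 1]
        )

    def forward(self, src_img, target_pose):
        x = torch.cat([src_img, target_pose], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    gen = PoseTransferGenerator()
    src = torch.randn(1, 3, 256, 256)    # source person image
    pose = torch.randn(1, 18, 256, 256)  # target pose keypoint heatmaps
    out = gen(src, pose)
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

In a full GAN setup, such a generator would be trained adversarially against a discriminator, typically with additional reconstruction or perceptual losses to preserve appearance details.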

Original language: English
Article number: 103572
Journal: Journal of Visual Communication and Image Representation
Volume: 87
DOIs
Publication status: Published - Aug 2022

Keywords

  • Artifact
  • GAN
  • Person image synthesis

