ACGAN: Attribute controllable person image synthesis GAN for pose transfer

Shao Yue Lin, Yan Jun Zhang*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Pose transfer and attribute control remain challenging tasks for image synthesis networks. Moreover, the images generated by such networks for these two tasks often contain artifacts, which cause a loss of detail or introduce erroneous image information and thus degrade the overall performance of existing methods. In this paper, a generative adversarial network (GAN) named ACGAN is proposed to accomplish both tasks while effectively eliminating artifacts in the generated images. The proposed network was compared quantitatively and qualitatively with previous works on the DeepFashion dataset and obtained better results. Moreover, the overall network has advantages over previous works in speed and number of parameters.

Original language: English
Article number: 103572
Journal: Journal of Visual Communication and Image Representation
Volume: 87
DOI: 10.1016/j.jvcir.2022.103572
Publication status: Published - Aug 2022

Cite this

Lin, S. Y., & Zhang, Y. J. (2022). ACGAN: Attribute controllable person image synthesis GAN for pose transfer. Journal of Visual Communication and Image Representation, 87, Article 103572. https://doi.org/10.1016/j.jvcir.2022.103572