TY - GEN
T1 - Semi-supervised Learning with Conditional GANs for Blind Generated Image Quality Assessment
AU - Zhang, Xuewen
AU - Zhang, Yunye
AU - Yu, Wenxin
AU - Nie, Liang
AU - Zhang, Zhiqiang
AU - Chen, Shiyu
AU - Gong, Jun
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Evaluating the quality of images generated by generative adversarial networks (GANs) is still an open problem. Metrics such as the Inception Score (IS) and the Fréchet Inception Distance (FID) cannot evaluate a single image, which hinders both the presentation of researchers' results and practical application. In this context, an end-to-end image quality assessment (IQA) neural network shows excellent promise for evaluating the quality of a single generated image. However, generated-image datasets with quality labels are too scarce to train an effective model. To address this problem, this paper proposes a semi-supervised learning strategy to evaluate the quality of a single generated image. Firstly, a conditional GAN (CGAN) is employed to produce a large number of generated-image samples, while the input conditions are regarded as their quality labels. Secondly, these samples are fed into an image quality regression neural network to train an initial quality assessment model. Finally, a small number of labeled samples are used to fine-tune the model. In the experiments, this paper uses FID to demonstrate the effectiveness of our method indirectly: the FID decreased by 3.32 on average after we removed the 40% lowest-quality images. This shows that our method can not only reasonably evaluate the overall quality of generated images but also accurately evaluate a single generated image.
AB - Evaluating the quality of images generated by generative adversarial networks (GANs) is still an open problem. Metrics such as the Inception Score (IS) and the Fréchet Inception Distance (FID) cannot evaluate a single image, which hinders both the presentation of researchers' results and practical application. In this context, an end-to-end image quality assessment (IQA) neural network shows excellent promise for evaluating the quality of a single generated image. However, generated-image datasets with quality labels are too scarce to train an effective model. To address this problem, this paper proposes a semi-supervised learning strategy to evaluate the quality of a single generated image. Firstly, a conditional GAN (CGAN) is employed to produce a large number of generated-image samples, while the input conditions are regarded as their quality labels. Secondly, these samples are fed into an image quality regression neural network to train an initial quality assessment model. Finally, a small number of labeled samples are used to fine-tune the model. In the experiments, this paper uses FID to demonstrate the effectiveness of our method indirectly: the FID decreased by 3.32 on average after we removed the 40% lowest-quality images. This shows that our method can not only reasonably evaluate the overall quality of generated images but also accurately evaluate a single generated image.
KW - Generated image
KW - Generative adversarial networks
KW - Image quality assessment
UR - http://www.scopus.com/inward/record.url?scp=85121910310&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-92238-2_40
DO - 10.1007/978-3-030-92238-2_40
M3 - Conference contribution
AN - SCOPUS:85121910310
SN - 9783030922375
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 482
EP - 493
BT - Neural Information Processing - 28th International Conference, ICONIP 2021, Proceedings
A2 - Mantoro, Teddy
A2 - Lee, Minho
A2 - Ayu, Media Anugerah
A2 - Wong, Kok Wai
A2 - Hidayanto, Achmad Nizar
PB - Springer Science and Business Media Deutschland GmbH
T2 - 28th International Conference on Neural Information Processing, ICONIP 2021
Y2 - 8 December 2021 through 12 December 2021
ER -