Multi-style image generation based on semantic image

Yue Yu*, Ding Li, Benyuan Li, Nengli Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 citations (Scopus)

Abstract

Image generation has long been an important research direction in computer vision, with rich applications in virtual reality, image design, and video synthesis. In this paper, we focus on arbitrary style transfer based on high-resolution (512×1024) semantic images. We propose a new multi-channel generative adversarial network that uses fewer parameters to generate multi-style images. The network framework consists of a content feature extraction network, a style feature extraction network, and a content-style feature fusion network. Our qualitative experiments show that the proposed multi-style image generation network can efficiently generate semantic-based, high-quality images in multiple artistic styles, with greater clarity and richer details. A user preference study shows that images generated by our method are preferred over those of competing methods, and our speed study shows that our method has the fastest generation speed among current state-of-the-art methods. We publicly release the source code of our project at https://github.com/JuanMaoHSQ/Multi-style-image-generation-based-on-semantic-image.
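The abstract names three sub-networks feeding into one generator. The dataflow can be sketched at the shape level in plain Python; all class names, channel counts, the style-code length, and the 4× down/upsampling factor below are illustrative assumptions, not the authors' implementation:

```python
# Shape-level sketch of the generator described in the abstract:
# content encoder + style encoder -> fusion network -> stylized image.
from dataclasses import dataclass
from typing import List

@dataclass
class Tensor3D:
    channels: int
    height: int
    width: int

def content_encoder(semantic_map: Tensor3D) -> Tensor3D:
    # Downsample the one-hot semantic map into content features (assumed 4x).
    return Tensor3D(256, semantic_map.height // 4, semantic_map.width // 4)

def style_encoder(style_image: Tensor3D) -> List[float]:
    # Collapse the style reference into a global style code (assumed length 64).
    return [0.0] * 64

def fusion_network(content: Tensor3D, style_code: List[float]) -> Tensor3D:
    # Modulate content features with the style code, then upsample back
    # to the input resolution to produce the RGB stylized image.
    return Tensor3D(3, content.height * 4, content.width * 4)

semantic = Tensor3D(channels=35, height=512, width=1024)  # 512x1024 input
style = Tensor3D(channels=3, height=256, width=256)

out = fusion_network(content_encoder(semantic), style_encoder(style))
print((out.channels, out.height, out.width))  # (3, 512, 1024)
```

The point of the sketch is only the composition: content and style are encoded separately, and fusion restores the full 512×1024 resolution, matching the paper's stated input size.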

Original language: English
Pages (from-to): 3411-3426
Number of pages: 16
Journal: Visual Computer
Volume: 40
Issue: 5
DOI
Publication status: Published - May 2024

