Abstract
Image generation has long been an important research direction in computer vision, with rich applications in virtual reality, image design, and video synthesis. In this paper, we focus on arbitrary style transfer based on high-resolution (512×1024) semantic images. We propose a new multi-channel generative adversarial network that uses fewer parameters to generate multi-style images. The framework consists of a content feature extraction network, a style feature extraction network, and a content-style feature fusion network. Our qualitative experiments show that the proposed multi-style image generation network can efficiently generate semantic-based, high-quality images with multiple artistic styles, greater clarity, and richer details. A user preference study shows that images generated by our method are preferred by participants, and a speed study shows that our method generates results faster than current state-of-the-art methods. We publicly release the source code of our project, which can be accessed at https://github.com/JuanMaoHSQ/Multi-style-image-generation-based-on-semantic-image.
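To make the three-part architecture described above concrete, the following is a minimal, hypothetical PyTorch sketch of a content encoder, a style encoder, and a fusion/decoder stage. All module names, layer sizes, the number of semantic channels, and the AdaIN-like fusion strategy are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Hypothetical sketch of the described generator: content feature extraction,
# style feature extraction, and content-style feature fusion.
# Layer choices are assumptions for illustration only.
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Extracts content features from a semantic (label) map."""
    def __init__(self, in_channels: int = 35, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, kernel_size=7, padding=3),
            nn.InstanceNorm2d(features),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features * 2, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(features * 2),
            nn.ReLU(inplace=True),
        )

    def forward(self, semantic_map: torch.Tensor) -> torch.Tensor:
        return self.net(semantic_map)


class StyleEncoder(nn.Module):
    """Summarizes a style reference image into a global style vector."""
    def __init__(self, style_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, style_dim)

    def forward(self, style_image: torch.Tensor) -> torch.Tensor:
        return self.fc(self.net(style_image).flatten(1))


class FusionDecoder(nn.Module):
    """Modulates content features with the style vector and decodes to RGB."""
    def __init__(self, content_channels: int = 128, style_dim: int = 128):
        super().__init__()
        # Predict per-channel scale and bias from the style vector.
        self.affine = nn.Linear(style_dim, content_channels * 2)
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(content_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        scale, bias = self.affine(style).chunk(2, dim=1)
        modulated = content * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
        return self.decode(modulated)


if __name__ == "__main__":
    semantic = torch.randn(1, 35, 512, 1024)   # one-hot-like semantic map at 512x1024
    style_ref = torch.randn(1, 3, 256, 256)    # style reference image
    content = ContentEncoder()(semantic)
    style = StyleEncoder()(style_ref)
    stylised = FusionDecoder()(content, style)
    print(stylised.shape)  # torch.Size([1, 3, 512, 1024])
```

Because the style branch collapses the reference image to a single vector, swapping in a different style image changes only the modulation parameters, which is one common way such a network can produce multiple styles from the same semantic input.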
| Original language | English |
| --- | --- |
| Pages (from-to) | 3411-3426 |
| Number of pages | 16 |
| Journal | Visual Computer |
| Volume | 40 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - May 2024 |
Keywords
- Generative adversarial network
- Image generation
- Style transfer