Multi-layer feature fusion based image style transfer with arbitrary text condition

Yue Yu*, Jingshuo Xing, Nengli Li

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Style transfer refers to the conversion of images between two different domains. Compared with style transfer based on a style image, image style transfer driven by a text description is freer and applicable to more practical scenarios. However, existing text-conditioned image style transfer methods need to be trained and optimized anew for each text and image input, which limits style transfer efficiency. Therefore, this paper proposes a multi-layer feature fusion based style transfer method (MlFFST) with arbitrary text condition. To address the problems of distortion and missing semantic content, we also introduce a multi-layer attention normalization module. Experimental results show that the proposed method generates high-quality, effective and stable stylized results for both images and videos, and that it meets real-time requirements while producing more artistic and aesthetic images and videos.
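The abstract does not specify how the attention normalization module is implemented, but feature-normalization approaches to style transfer commonly build on adaptive instance normalization (AdaIN), which aligns the channel-wise statistics of content features to those of style features. The following is a minimal illustrative sketch of that general idea in NumPy; the function name and array shapes are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Illustrative adaptive instance normalization (not the paper's module):
    shift the per-channel mean/std of content features (C, H, W) to match
    those of the style features."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    # Normalize content statistics, then re-scale with style statistics.
    normalized = (content_feat - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, (4, 8, 8))   # hypothetical feature map
style = rng.normal(3.0, 2.0, (4, 8, 8))     # hypothetical style features
out = adain(content, style)
```

After the transform, each channel of `out` carries the style features' mean and standard deviation while preserving the content features' spatial pattern; a multi-layer variant would apply such alignment at several encoder depths and fuse the results.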

Original language: English
Article number: 117243
Journal: Signal Processing: Image Communication
Volume: 132
DOI
Publication status: Published - Mar 2025
