TY - GEN
T1 - FFGAN
T2 - 5th Asia-Pacific Conference on Image Processing, Electronics and Computers, IPEC 2024
AU - Hua, Runzhou
AU - Zhang, Ji
AU - Xue, Jingfeng
AU - Wang, Yong
AU - Liu, Zhenyan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - With the rapid development and wide application of deep learning models trained on large-scale data, insufficient data has become a constraint on the performance and applicability of deep learning models. In response, researchers have proposed methods based on feature fusion. However, existing feature fusion methods are limited in their ability to generate diverse and accurate images. To further improve the effectiveness of image classification tasks, this paper proposes a novel method for few-shot image generation based on local feature fusion. The method combines feature fusion with Generative Adversarial Networks (FFGAN) to improve the quality and diversity of generated images and addresses issues such as spatial misalignment in generated images. In addition, this paper introduces a local reconstruction loss to optimize the local feature fusion module; this loss improves the quality of few-shot image generation by enforcing that, in certain local regions, the generated images closely resemble the input images at the corresponding local positions. Finally, extensive experiments are conducted on image generation, image classification, and image visualization.
AB - With the rapid development and wide application of deep learning models trained on large-scale data, insufficient data has become a constraint on the performance and applicability of deep learning models. In response, researchers have proposed methods based on feature fusion. However, existing feature fusion methods are limited in their ability to generate diverse and accurate images. To further improve the effectiveness of image classification tasks, this paper proposes a novel method for few-shot image generation based on local feature fusion. The method combines feature fusion with Generative Adversarial Networks (FFGAN) to improve the quality and diversity of generated images and addresses issues such as spatial misalignment in generated images. In addition, this paper introduces a local reconstruction loss to optimize the local feature fusion module; this loss improves the quality of few-shot image generation by enforcing that, in certain local regions, the generated images closely resemble the input images at the corresponding local positions. Finally, extensive experiments are conducted on image generation, image classification, and image visualization.
KW - feature fusion
KW - few-shot image generation
KW - generative adversarial networks
KW - local reconstruction loss
UR - http://www.scopus.com/inward/record.url?scp=85207080772&partnerID=8YFLogxK
U2 - 10.1109/IPEC61310.2024.00026
DO - 10.1109/IPEC61310.2024.00026
M3 - Conference contribution
AN - SCOPUS:85207080772
T3 - Proceedings - 2024 Asia-Pacific Conference on Image Processing, Electronics and Computers, IPEC 2024
SP - 96
EP - 102
BT - Proceedings - 2024 Asia-Pacific Conference on Image Processing, Electronics and Computers, IPEC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 12 April 2024 through 14 April 2024
ER -