Prompt-Based Learning for Unpaired Image Captioning

Peipei Zhu, Xiao Wang, Lin Zhu, Zhenglong Sun*, Wei-Shi Zheng, Yaowei Wang*, Changwen Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

Unpaired Image Captioning (UIC) aims to learn image descriptions from unaligned vision-language sample pairs. Existing works usually tackle this task with adversarial learning and a visual concept reward based on reinforcement learning. However, these approaches capture only limited cross-domain information between the vision and language domains, which restrains the captioning performance of UIC. Inspired by the success of Vision-Language Pre-Trained Models (VL-PTMs), in this work we attempt to infer cross-domain cues about a given image from large VL-PTMs for the UIC task. This work is also motivated by recent successes of prompt learning in many downstream multi-modal tasks, including image-text retrieval and visual question answering. Specifically, a semantic prompt is introduced and aggregated with visual features for more accurate caption prediction under the adversarial learning framework. In addition, a metric prompt is designed to select high-quality pseudo image-caption samples produced by the basic captioning model and to refine the model in an iterative manner. Extensive experiments on the COCO and Flickr30K datasets validate the promising captioning ability of the proposed model. We expect the proposed prompt-based UIC model to stimulate a new line of research on VL-PTM-based captioning.
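The abstract describes two mechanisms: aggregating a semantic prompt with visual features before caption decoding, and using a metric prompt to score pseudo image-caption pairs so that only high-quality ones are kept for iterative refinement. The following is a minimal, hypothetical PyTorch sketch of what such components could look like; the module names, feature dimensions, gated fusion rule, and cosine-similarity scoring are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): placeholder encoders and dimensions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptAggregator(nn.Module):
    """Fuses a semantic prompt embedding (e.g., detected concept words encoded by a
    VL-PTM text encoder) with region-level visual features before caption decoding."""
    def __init__(self, vis_dim=2048, prompt_dim=512, hidden_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)
        self.prompt_proj = nn.Linear(prompt_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, vis_feats, prompt_emb):
        # vis_feats: (B, N, vis_dim) region features; prompt_emb: (B, prompt_dim)
        v = self.vis_proj(vis_feats)                   # (B, N, H)
        p = self.prompt_proj(prompt_emb).unsqueeze(1)  # (B, 1, H)
        p = p.expand(-1, v.size(1), -1)
        # Gated fusion of visual and prompt information; the fused features would
        # then be passed to the caption decoder trained under adversarial learning.
        return torch.tanh(self.gate(torch.cat([v, p], dim=-1)))

def filter_pseudo_pairs(image_embs, caption_embs, keep_ratio=0.5):
    """Metric-prompt-style selection: keep the pseudo image-caption pairs whose
    image/text embeddings (e.g., from a VL-PTM) have the highest cosine similarity."""
    img = F.normalize(image_embs, dim=-1)
    txt = F.normalize(caption_embs, dim=-1)
    scores = (img * txt).sum(dim=-1)                   # cosine similarity per pair
    k = max(1, int(keep_ratio * scores.size(0)))
    keep_idx = scores.topk(k).indices                  # indices of retained pairs
    return keep_idx, scores

if __name__ == "__main__":
    # Toy shapes only; real features would come from a detector and VL-PTM encoders.
    fused = PromptAggregator()(torch.randn(4, 36, 2048), torch.randn(4, 512))
    keep_idx, _ = filter_pseudo_pairs(torch.randn(8, 512), torch.randn(8, 512))
    print(fused.shape, keep_idx.tolist())

In this reading, the selected pseudo pairs would be fed back as training data for the next round of captioning-model refinement, which is the iterative scheme the abstract refers to.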

Original language: English
Pages (from-to): 379-393
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Volume: 26
DOIs
Publication status: Published - 2024

Keywords

  • Metric prompt
  • prompt-based learning
  • semantic prompt
  • unpaired image captioning
