How Vision-Language Tasks Benefit from Large Pre-trained Models: A Survey

  • Yayun Qi
  • Hongxi Li
  • Yiqi Song
  • Xinxiao Wu*
  • Jiebo Luo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The exploration of vision-language tasks, such as visual captioning, visual question answering, and visual commonsense reasoning, is an important area of artificial intelligence that continues to attract the attention of the research community. Despite improvements in overall performance, classic challenges remain in vision-language tasks and hinder the development of this area. In recent years, the rise of pre-trained models has been driving research on vision-language tasks. Owing to the massive scale of their training data and model parameters, pre-trained models have exhibited excellent performance on numerous downstream tasks. Inspired by the powerful capabilities of pre-trained models, new paradigms have emerged to address the classic challenges, and such methods have become mainstream in current research, attracting increasing attention and advancing rapidly. In this paper, we present a comprehensive overview of how vision-language tasks benefit from pre-trained models. First, we review several main challenges in vision-language tasks and discuss the limitations of previous solutions before the era of pre-training. Next, we summarize recent advances in incorporating pre-trained models to address the challenges in vision-language tasks. Finally, we analyze the potential risks associated with the inherent limitations of pre-trained models, discuss possible solutions, and suggest future research directions.

Original language: English
Journal: IEEE Transactions on Multimedia
DOIs
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • Large language model
  • Pre-trained model
  • Vision-language model
  • Vision-language task
