Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model

Xiaolin Chen, Xuemeng Song*, Liqiang Jing, Shuo Li, Linmei Hu, Liqiang Nie*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

Text response generation for multimodal task-oriented dialog systems, which aims to generate the proper text response given the multimodal context, is an essential yet challenging task. Although existing efforts have achieved compelling success, they still suffer from two pivotal limitations: (1) they overlook the benefit of generative pretraining, and (2) they ignore the textual context-related knowledge. To address these limitations, we propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD), consisting of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation. To be specific, the dual knowledge selection component aims to select the related knowledge according to both textual and visual modalities of the given context. Thereafter, the dual knowledge-enhanced context learning component targets seamlessly integrating the selected knowledge into the multimodal context learning from both global and local perspectives, where the cross-modal semantic relation is also explored. Moreover, the knowledge-enhanced response generation component comprises a revised BART decoder, where an additional dot-product knowledge-decoder attention sub-layer is introduced to explicitly utilize the knowledge to advance the text response generation. Extensive experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
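The "dot-product knowledge-decoder attention sub-layer" mentioned in the abstract can be illustrated with a minimal sketch: decoder hidden states act as queries attending over embeddings of the selected knowledge entries. This is not the authors' implementation; the function name, shapes, and use of plain NumPy are all assumptions for illustration.

```python
import numpy as np

def knowledge_decoder_attention(decoder_states, knowledge_embeds):
    """Scaled dot-product attention in which decoder states (queries)
    attend over selected knowledge embeddings (keys and values).
    decoder_states: (T, d) array; knowledge_embeds: (K, d) array.
    Returns a (T, d) knowledge-aware representation.
    Hypothetical sketch -- not the DKMD code."""
    d = decoder_states.shape[-1]
    # similarity scores between each decoder step and each knowledge entry
    scores = decoder_states @ knowledge_embeds.T / np.sqrt(d)   # (T, K)
    # numerically stable softmax over the knowledge axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # weighted sum of knowledge embeddings per decoder step
    return weights @ knowledge_embeds                           # (T, d)

# toy usage: 2 decoder steps, 3 knowledge entries, hidden size 4
dec = np.ones((2, 4))
kn = np.arange(12, dtype=float).reshape(3, 4)
out = knowledge_decoder_attention(dec, kn)
```

In the paper's architecture this sub-layer sits inside a revised BART decoder alongside the usual self- and encoder-decoder attention; the sketch above shows only the core attention computation.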

Original language: English
Article number: 53
Journal: ACM Transactions on Information Systems
Volume: 42
Issue number: 2
Publication status: Published - 7 Nov 2023

Keywords

  • Multimodal task-oriented dialog systems
  • dual knowledge selection
  • generative pretrained language model
  • text response generation
