Cross-domain coreference modeling in dialogue state tracking with prompt learning

Heng Da Xu, Xian Ling Mao*, Puhai Yang, Fanshu Sun, Heyan Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Dialogue state tracking (DST) identifies user goals from the dialogue context and is an essential component of task-oriented dialogue systems. In multi-domain task-oriented dialogues, a user often refers back to information mentioned earlier in the context when the topic shifts to another domain. Accurately modeling such cross-domain coreference plays an important role in building capable DST systems. To the best of our knowledge, only one prior work attempts to model the cross-domain coreference phenomenon in DST; however, it is inefficient and struggles with the complex reasoning required over the dialogue context. To tackle these problems, in this paper we propose a simple but effective DST model, called Coref-DST, to track cross-domain coreference slots. Instead of predicting the actual values via complex reasoning, Coref-DST directly identifies the coreferred domains and slots from the dialogue context. Moreover, a domain-specific prompt method is proposed to predict all the slot values in a domain simultaneously, so as to better capture the relationships among slots. Extensive experimental results on the MultiWOZ 2.3 dataset demonstrate that Coref-DST not only outperforms state-of-the-art DST baselines but is also more efficient in both training and inference.
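The core idea described in the abstract can be illustrated with a minimal sketch: once a model has classified a target slot as coreferent and named the source (domain, slot) it refers to, the value is filled by a simple copy from the tracked dialogue state rather than by generating it through reasoning over the context. This is an illustrative toy example, not the paper's actual Coref-DST implementation; all names here are hypothetical.

```python
def resolve_coreference(dialogue_state, coref_predictions):
    """Fill coreferent slots by copying values already present in the state.

    dialogue_state: {domain: {slot: value}} -- the state tracked so far.
    coref_predictions: {(tgt_domain, tgt_slot): (src_domain, src_slot)} --
        slots the model classified as coreferent, with their predicted source.
    """
    for (tgt_dom, tgt_slot), (src_dom, src_slot) in coref_predictions.items():
        src_value = dialogue_state.get(src_dom, {}).get(src_slot)
        if src_value is not None:
            # Copy the source value instead of re-extracting it from text.
            dialogue_state.setdefault(tgt_dom, {})[tgt_slot] = src_value
    return dialogue_state


# Example: the user booked a restaurant in the centre, then asks for
# "a hotel in the same area" -- hotel-area corefers to restaurant-area.
state = {"restaurant": {"area": "centre", "food": "italian"}}
preds = {("hotel", "area"): ("restaurant", "area")}
state = resolve_coreference(state, preds)
# state["hotel"]["area"] == "centre"
```

The sketch shows why identifying the coreferred (domain, slot) pair is a simpler prediction target than generating the value itself: resolution reduces to a dictionary lookup over the existing state.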

Original language: English
Article number: 111189
Journal: Knowledge-Based Systems
Volume: 283
DOIs
Publication status: Published - 11 Jan 2024

Keywords

  • Cross-domain coreference
  • Dialogue state tracking
  • Prompt learning
  • Task-oriented dialogue
