Cross-domain coreference modeling in dialogue state tracking with prompt learning

Heng Da Xu, Xian Ling Mao*, Puhai Yang, Fanshu Sun, Heyan Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Dialogue state tracking (DST) aims to identify user goals from the dialogue context and is an essential component of task-oriented dialogue systems. In multi-domain task-oriented dialogues, a user often refers to information mentioned in the previous context when the topic shifts to another domain. Accurate modeling of such cross-domain coreference plays an important role in building qualified DST systems. To the best of our knowledge, only one prior work attempts to model the cross-domain coreference phenomenon in DST; however, it suffers from low efficiency and from the difficulty of handling complex reasoning over the dialogue context. To tackle these problems, in this paper we propose a simple but effective DST model, called Coref-DST, to track cross-domain coreference slots. Instead of predicting the actual values via complex reasoning, Coref-DST directly identifies the coreferred domains and slots from the dialogue context. Moreover, a domain-specific prompt method is proposed to predict all the slot values in a domain simultaneously, so as to better capture the relationships among the slots. Extensive experimental results on the MultiWOZ 2.3 dataset demonstrate that Coref-DST not only outperforms state-of-the-art DST baselines but also achieves higher training and inference efficiency.
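The abstract does not give the exact prompt format, but the general idea of domain-specific prompting for DST can be illustrated with a minimal sketch: one prompt per domain asks a seq2seq model to decode all of that domain's slot values at once, so related slots (including coreferred ones) are predicted jointly. The checkpoint, prompt wording, and slot list below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of domain-specific prompting for DST (illustrative only;
# the prompt format and model choice are assumptions, not Coref-DST itself).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Dialogue context with a cross-domain coreference ("the same area as the
# train's destination" refers back to the train domain).
dialogue = (
    "[user] I need a train to Cambridge on Friday. "
    "[system] TR1234 leaves at 09:00. "
    "[user] Great. Also find me a hotel in the same area as the train's destination."
)

# One prompt per domain: the model is asked to fill every slot of the hotel
# domain in a single decoding pass.
hotel_slots = ["area", "price range", "stay", "day", "people"]
prompt = f"track hotel state: {dialogue} slots: {', '.join(hotel_slots)}"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

An untuned checkpoint will not produce meaningful slot values; the sketch only shows how a single domain-level prompt lets all of a domain's slots be decoded together rather than queried one slot at a time.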

Original language: English
Article number: 111189
Journal: Knowledge-Based Systems
Volume: 283
DOI
Publication status: Published - 11 Jan 2024
