TY - JOUR
T1 - Cross-domain coreference modeling in dialogue state tracking with prompt learning
AU - Xu, Heng Da
AU - Mao, Xian Ling
AU - Yang, Puhai
AU - Sun, Fanshu
AU - Huang, Heyan
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2024/1/11
Y1 - 2024/1/11
AB - Dialogue state tracking (DST) identifies user goals from the dialogue context and is an essential component of task-oriented dialogue systems. In multi-domain task-oriented dialogues, a user often refers to information mentioned earlier in the context when the topic shifts to another domain. Accurate modeling of such cross-domain coreference plays an important role in building high-quality DST systems. To the best of our knowledge, only one prior work attempts to model the cross-domain coreference phenomenon in DST, and it suffers from low efficiency and difficulty in handling the complex reasoning required over the dialogue context. To tackle these problems, in this paper we propose a simple but effective DST model, called Coref-DST, to track cross-domain coreference slots. Instead of predicting the actual values via complex reasoning, Coref-DST directly identifies the coreferred domains and slots from the dialogue context. Moreover, a domain-specific prompt method is proposed to predict all the slot values in a domain simultaneously, so as to better capture the relationships among the slots. Extensive experimental results on the MultiWOZ 2.3 dataset demonstrate that Coref-DST not only outperforms state-of-the-art DST baselines but also achieves higher training and inference efficiency.
KW - Cross-domain coreference
KW - Dialogue state tracking
KW - Prompt learning
KW - Task-oriented dialogue
UR - http://www.scopus.com/inward/record.url?scp=85181669719&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2023.111189
DO - 10.1016/j.knosys.2023.111189
M3 - Article
AN - SCOPUS:85181669719
SN - 0950-7051
VL - 283
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 111189
ER -