TY - JOUR
T1 - Building knowledge-grounded dialogue systems with graph-based semantic modelling
AU - Yang, Yizhe
AU - Huang, Heyan
AU - Gao, Yang
AU - Li, Jiawei
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/8/15
Y1 - 2024/8/15
N2 - The knowledge-grounded dialogue task aims to generate responses that convey information from given knowledge documents. However, it is a challenge for current sequence-based models to acquire knowledge from complex documents and integrate it to produce correct responses without the aid of an explicit semantic structure. To address these issues, we propose a novel graph structure, Grounded Graph (G2), that models the semantic structure of both dialogue and knowledge to facilitate knowledge selection and integration for knowledge-grounded dialogue generation. We also propose a Grounded Graph Aware Transformer (G2AT) model that fuses multi-form knowledge (both sequential and graphical) to enhance knowledge-grounded response generation. Our experimental results show that our proposed model outperforms previous state-of-the-art methods with more than 10% gains in response generation and nearly 20% improvement in factual consistency. Further, our model exhibits good generalization ability and robustness. By incorporating semantic structures as prior knowledge in deep neural networks, our model provides an effective way to aid language generation.
AB - The knowledge-grounded dialogue task aims to generate responses that convey information from given knowledge documents. However, it is a challenge for current sequence-based models to acquire knowledge from complex documents and integrate it to produce correct responses without the aid of an explicit semantic structure. To address these issues, we propose a novel graph structure, Grounded Graph (G2), that models the semantic structure of both dialogue and knowledge to facilitate knowledge selection and integration for knowledge-grounded dialogue generation. We also propose a Grounded Graph Aware Transformer (G2AT) model that fuses multi-form knowledge (both sequential and graphical) to enhance knowledge-grounded response generation. Our experimental results show that our proposed model outperforms previous state-of-the-art methods with more than 10% gains in response generation and nearly 20% improvement in factual consistency. Further, our model exhibits good generalization ability and robustness. By incorporating semantic structures as prior knowledge in deep neural networks, our model provides an effective way to aid language generation.
KW - Knowledge acquisition
KW - Knowledge fusion
KW - Knowledge-grounded dialogue
KW - Natural language generation
UR - http://www.scopus.com/inward/record.url?scp=85194372859&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2024.111943
DO - 10.1016/j.knosys.2024.111943
M3 - Article
AN - SCOPUS:85194372859
SN - 0950-7051
VL - 298
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 111943
ER -