Towards Faithful Dialogs via Focus Learning

Yifan Deng, Xingsheng Zhang*, Heyan Huang*, Yue Hu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

Maintaining faithfulness between responses and knowledge is an important research topic for building reliable knowledge-grounded dialogue systems. Existing models rely heavily on elaborate data engineering and on increasing model parameters, while neglecting to track the tokens that most strongly influence the loss and therefore determine the model's optimization direction at each iteration. To address this issue, we propose Focus Learning (FocusL), a novel learning approach that adjusts each token's contribution to the optimization direction by directly scaling its objective loss. Specifically, we first introduce a positioning method that uses relevance distributions between the knowledge and each response token to locate knowledge-aware tokens. We then design a relevance-to-weight transformation that provides dynamic token-level weights for adjusting the cross-entropy loss. Finally, we use the weighted loss to encourage the model to pay special attention to knowledge utilization. Experimental results demonstrate that our method achieves new state-of-the-art results and generates more reliable responses while maintaining training stability.
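The abstract gives only a high-level description; the sketch below illustrates the general idea of scaling per-token cross-entropy by dynamic weights. Everything here is an illustrative assumption rather than the paper's implementation: the function name `focus_weighted_loss`, the random relevance scores, and the `1 + mean-normalized relevance` mapping all stand in for the paper's positioning method and relevance-to-weight transformation.

```python
# A minimal sketch (not the paper's code) of token-level loss re-weighting,
# assuming per-token relevance scores are already available.
import torch
import torch.nn.functional as F

def focus_weighted_loss(logits: torch.Tensor,
                        targets: torch.Tensor,
                        token_weights: torch.Tensor,
                        pad_id: int = 0) -> torch.Tensor:
    """Cross-entropy where each token's loss is scaled by a weight,
    so knowledge-aware tokens contribute more to the gradient.

    logits:        (batch, seq_len, vocab) decoder outputs
    targets:       (batch, seq_len) gold response token ids
    token_weights: (batch, seq_len) dynamic per-token weights
    """
    # Per-token negative log-likelihood; keep the token dimension.
    nll = F.cross_entropy(
        logits.transpose(1, 2),   # (batch, vocab, seq_len), as cross_entropy expects
        targets,
        reduction="none",
        ignore_index=pad_id,
    )                             # -> (batch, seq_len)

    # Scale each token's loss and average over non-padding tokens.
    mask = (targets != pad_id).float()
    weighted = nll * token_weights * mask
    return weighted.sum() / mask.sum().clamp(min=1.0)

# Hypothetical usage: relevance here is random; in the paper it comes from
# knowledge/response relevance distributions, and the relevance-to-weight
# mapping below (1 + mean-normalized relevance) is a placeholder guess.
logits = torch.randn(2, 6, 100, requires_grad=True)
targets = torch.randint(1, 100, (2, 6))
relevance = torch.rand(2, 6)
weights = 1.0 + relevance / relevance.mean()
loss = focus_weighted_loss(logits, targets, weights)
loss.backward()
```

Because the re-weighting only rescales the standard cross-entropy term per token, it drops into any maximum-likelihood training loop without changing the model architecture.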

Original language: English
Title of host publication: Long Papers
Publisher: Association for Computational Linguistics (ACL)
Pages: 4554-4566
Number of pages: 13
ISBN (electronic): 9781959429722
Publication status: Published - 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 Jul 2023 - 14 Jul 2023

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
Volume: 1
ISSN (Print): 0736-587X

Conference

Conference: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Country/Territory: Canada
City: Toronto
Period: 9/07/23 - 14/07/23
