Joint Contrastive Learning for Factual Consistency Evaluation of Cross-Lingual Abstract Summarization

Bokai Guo, Chong Feng*, Fang Liu, Xinyan Li, Xiaomei Wang

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

1 Citation (Scopus)

Abstract

Current summarization models tend to generate erroneous or irrelevant summaries, i.e., factual inconsistency, which undoubtedly hinders the real-world application of summarization models. The difficulty in language alignment makes factual inconsistency in cross-lingual summarization (CLS) more common and factual consistency checking more challenging. Research on factual consistency has paid little attention to CLS due to the above difficulties, focusing mainly on monolingual summarization (MS). In this paper, we investigate the cross-lingual domain and propose a weakly supervised factual consistency evaluation model for CLS. In particular, we automatically synthesize large-scale datasets by a series of rule-based text transformations and manually annotate the test and validation sets. In addition, we also train the model jointly with contrastive learning to enhance the model’s ability to recognize factual errors. The experimental results on the manually annotated test set show that our model can effectively identify the consistency between the summaries and the source documents and outperform the baseline models.
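The joint training described in the abstract pairs a task loss with a contrastive term that pulls a factually consistent summary's representation toward the source document while pushing synthesized inconsistent summaries away. A minimal sketch of such an objective, assuming an InfoNCE-style loss over embedding vectors — the function names, temperature, and weighting factor `lam` are illustrative assumptions, not details taken from the paper:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: -log softmax score of the positive against all negatives."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

def joint_loss(task_loss, anchor, positive, negatives, lam=0.5):
    """Task loss (e.g. consistency classification) plus a weighted contrastive term."""
    return task_loss + lam * contrastive_loss(anchor, positive, negatives)

# An embedding of a consistent summary (close to the document embedding)
# is penalized far less than an inconsistent, orthogonal one.
doc = [1.0, 0.0]
easy = contrastive_loss(doc, positive=[1.0, 0.0], negatives=[[0.0, 1.0]])
hard = contrastive_loss(doc, positive=[0.0, 1.0], negatives=[[1.0, 0.0]])
```

In this setup the rule-based transformations mentioned in the abstract would supply the `negatives`: corrupted summaries serving as hard negative examples during training.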

Original language: English
Title of host publication: Machine Translation - 19th China Conference, CCMT 2023, Proceedings
Editors: Yang Feng, Chong Feng
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 116-127
Number of pages: 12
ISBN (Print): 9789819978939
DOI
Publication status: Published - 2023
Event: 19th China Conference on Machine Translation, CCMT 2023 - Jinan, China
Duration: 19 Oct 2023 - 21 Oct 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1922 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 19th China Conference on Machine Translation, CCMT 2023
Country/Territory: China
City: Jinan
Period: 19/10/23 - 21/10/23
