TY - GEN
T1 - Joint Contrastive Learning for Factual Consistency Evaluation of Cross-Lingual Abstract Summarization
AU - Guo, Bokai
AU - Feng, Chong
AU - Liu, Fang
AU - Li, Xinyan
AU - Wang, Xiaomei
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2023
Y1 - 2023
N2 - Current summarization models tend to generate erroneous or irrelevant summaries, i.e., summaries that are factually inconsistent with the source, which hinders the real-world application of summarization models. The difficulty of cross-lingual alignment makes factual inconsistency more common in cross-lingual summarization (CLS) and makes factual consistency checking more challenging. Owing to these difficulties, research on factual consistency has paid little attention to CLS, focusing mainly on monolingual summarization (MS). In this paper, we investigate the cross-lingual setting and propose a weakly supervised factual consistency evaluation model for CLS. Specifically, we automatically synthesize large-scale training data through a series of rule-based text transformations and manually annotate the test and validation sets. In addition, we train the model jointly with contrastive learning to enhance its ability to recognize factual errors. Experimental results on the manually annotated test set show that our model effectively identifies the consistency between summaries and their source documents and outperforms the baseline models.
AB - Current summarization models tend to generate erroneous or irrelevant summaries, i.e., summaries that are factually inconsistent with the source, which hinders the real-world application of summarization models. The difficulty of cross-lingual alignment makes factual inconsistency more common in cross-lingual summarization (CLS) and makes factual consistency checking more challenging. Owing to these difficulties, research on factual consistency has paid little attention to CLS, focusing mainly on monolingual summarization (MS). In this paper, we investigate the cross-lingual setting and propose a weakly supervised factual consistency evaluation model for CLS. Specifically, we automatically synthesize large-scale training data through a series of rule-based text transformations and manually annotate the test and validation sets. In addition, we train the model jointly with contrastive learning to enhance its ability to recognize factual errors. Experimental results on the manually annotated test set show that our model effectively identifies the consistency between summaries and their source documents and outperforms the baseline models.
KW - Contrastive learning
KW - Cross-lingual summarization
KW - Factual consistency evaluation
UR - http://www.scopus.com/inward/record.url?scp=85177235055&partnerID=8YFLogxK
U2 - 10.1007/978-981-99-7894-6_11
DO - 10.1007/978-981-99-7894-6_11
M3 - Conference contribution
AN - SCOPUS:85177235055
SN - 9789819978939
T3 - Communications in Computer and Information Science
SP - 116
EP - 127
BT - Machine Translation - 19th China Conference, CCMT 2023, Proceedings
A2 - Feng, Yang
A2 - Feng, Chong
PB - Springer Science and Business Media Deutschland GmbH
T2 - 19th China Conference on Machine Translation, CCMT 2023
Y2 - 19 October 2023 through 21 October 2023
ER -