生成式文本质量的自动评估方法综述

Translated title of the contribution: A Survey of Automatic Evaluation on the Quality of Generated Text

Research output: Contribution to conference › Paper › peer-review

1 Citation (Scopus)

Abstract

Human evaluation, as the gold standard for assessing the quality of generated text, is prohibitively expensive. Automatic evaluation, on the other hand, aims to achieve high correlation with human evaluation, thereby enabling automated analysis and assessment of generated text quality. With the iterative advancement of technologies in the field of natural language processing, the automatic evaluation of generated text quality has undergone several paradigm shifts. However, the academic community still lacks a systematic summary of these automatic evaluation techniques. Therefore, this paper first systematically summarizes the existing methods for automatic evaluation of generated text. It then analyzes the main development trends of these automatic evaluation methods. Finally, to provide a more comprehensive understanding of automatic evaluation, the paper discusses and anticipates future research directions in the field.

Original language: Chinese (Traditional)
Pages: 169-196
Number of pages: 28
Publication status: Published - 2024
Externally published: Yes
Event: 23rd Chinese National Conference on Computational Linguistics, CCL 2024 - Taiyuan, China
Duration: 24 Jul 2024 – 28 Jul 2024

Conference

Conference: 23rd Chinese National Conference on Computational Linguistics, CCL 2024
Country/Territory: China
City: Taiyuan
Period: 24/07/24 – 28/07/24
