ReFF: Reinforcing Format Faithfulness in Language Models Across Varied Tasks

Jiashu Yao, Heyan Huang, Zeming Liu, Haoyu Wen, Wei Su, Boao Qian, Yuhang Guo*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Following formatting instructions to generate well-structured content is a fundamental yet often unmet capability for large language models (LLMs). To study this capability, which we refer to as format faithfulness, we present FORMATBENCH, a comprehensive format-related benchmark. Compared to previous format-related benchmarks, FORMATBENCH covers a greater variety of tasks in terms of application scenarios (traditional NLP tasks, creative works, autonomous agency tasks), human-LLM interaction styles (single-turn instruction, multi-turn chat), and format types (inclusion, wrapping, length, coding). Moreover, each task in FORMATBENCH is accompanied by a format checker program. Extensive experiments on the benchmark reveal that state-of-the-art open- and closed-source LLMs still suffer from severe deficiencies in format faithfulness. By virtue of the decidable nature of formats, we propose to Reinforce Format Faithfulness (REFF) to help LLMs generate formatted output as instructed without compromising general quality. Without any annotated data, REFF can substantially improve the format faithfulness rate (e.g., from 21.6% in original LLaMA3 to 95.0% on the caption segmentation task) while keeping general quality comparable (e.g., from 47.3 to 46.4 in F1 score). Combined with labeled training data, REFF can simultaneously improve both format faithfulness (e.g., from 21.6% in original LLaMA3 to 75.5%) and general quality (e.g., from 47.3 to 61.6 in F1 score). We further offer an interpretability analysis to explain how REFF improves both format faithfulness and general quality.
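The abstract notes that every FORMATBENCH task comes with a format checker program, and that the decidability of formats is what makes reinforcement possible. As a rough illustration only (the checker below and its tag names are hypothetical, not the paper's actual checkers), a "wrapping"-type format constraint can be verified by a simple decidable program whose boolean verdict serves as a binary reward signal:

```python
import re

def wrapping_checker(output: str) -> bool:
    """Hypothetical checker for a 'wrapping' format: the model's output
    must consist of exactly one well-formed <answer>...</answer> block."""
    return re.fullmatch(r"\s*<answer>.*</answer>\s*", output, re.DOTALL) is not None

def format_reward(output: str, checker) -> float:
    """Binary reward derived from a decidable format checker:
    1.0 if the output satisfies the format, 0.0 otherwise."""
    return 1.0 if checker(output) else 0.0

# Example verdicts on a faithful and an unfaithful output.
faithful = format_reward("<answer>42</answer>", wrapping_checker)      # 1.0
unfaithful = format_reward("The answer is 42.", wrapping_checker)      # 0.0
```

Because such checkers require no human annotation, a reward of this shape can supervise reinforcement learning on unlabeled prompts, which is consistent with the annotation-free setting described in the abstract.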

Original language: English
Pages (from-to): 25660-25668
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 24
DOIs
Publication status: Published - 11 Apr 2025
Externally published: Yes
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
