FLASHBACK: Efficient Retrieval-Augmented Language Modeling for Fast Inference

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Retrieval-Augmented Language Modeling (RALM), which augments a large language model (LLM) with relevant documents retrieved from an external corpus, is a proven methodology for enabling the LLM to generate information beyond the scope of its pre-training corpus. Previous work, which iteratively retrieves content and prepends it to the input, incurs high run-time cost and degrades the inference efficiency of the LLM because it fails to use the Key-Value (KV) cache effectively. We propose FLASHBACK, a modular RALM designed to improve the inference efficiency of RALM with an appending context pattern while maintaining decent performance after fine-tuning with Low-Rank Adaptation (LoRA). FLASHBACK appends retrieved documents to the end of the context so that the KV cache can be reused efficiently. We also introduce the Marking Token, two special prompt tokens that mark the appended context during fine-tuning. Our experiments show that FLASHBACK improves language modeling performance on the perplexity metric, and that the Marking Token is a useful add-on when fine-tuning models on specific context patterns. By bypassing unnecessary recomputation, FLASHBACK achieves fast inference with long context inputs: in our runtime test, inference is up to 4× faster than the prepending counterpart on a 7B LLM (Llama 2).
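The efficiency argument in the abstract rests on a simple property of decoder-only KV caches: cached key/value states are reusable only for the longest shared token prefix between the previous and current inputs. The following toy sketch (illustrative only, not the authors' code; token names are hypothetical) shows why prepending a newly retrieved document invalidates the cache while appending preserves it:

```python
# Illustrative sketch of KV-cache reuse under prepending vs. appending.
# A decoder-only LM can reuse cached key/value states only for the
# longest common prefix of the old and new token sequences.

def reusable_prefix_len(cached, new):
    """Number of leading tokens whose KV states can be reused."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

query = ["Q1", "Q2", "Q3"]   # user context already encoded in the cache
doc_a = ["D1", "D2"]         # first retrieved document
doc_b = ["D3", "D4"]         # new document after re-retrieval

# Prepending: retrieved tokens occupy the first positions, so swapping
# in a new document changes position 0 and forces full recomputation.
print(reusable_prefix_len(doc_a + query, doc_b + query))  # -> 0

# Appending (FLASHBACK's pattern): the query prefix is unchanged, so
# its KV states stay valid; only the appended tokens are recomputed.
print(reusable_prefix_len(query + doc_a, query + doc_b))  # -> 3
```

In the appending pattern, the fraction of recomputed tokens shrinks as the shared prefix grows, which is consistent with the reported speedup being largest for long context inputs.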

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: ACL 2025
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Publisher: Association for Computational Linguistics (ACL)
Pages: 595-608
Number of pages: 14
ISBN (Electronic): 9798891762565
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025 - Vienna, Austria
Duration: 27 Jul 2025 – 1 Aug 2025

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Country/Territory: Austria
City: Vienna
Period: 27/07/25 – 01/08/25
