TY - GEN
T1 - Leveraging Approximate Caching for Faster Retrieval-Augmented Generation
AU - Bergman, Shai
AU - Kermarrec, Anne-Marie
AU - Petrescu, Diana
AU - Pires, Rafael
AU - Randl, Mathis
AU - De Vos, Martijn
AU - Zhang, Ji
N1 - Publisher Copyright:
© 2025 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2025/12/14
Y1 - 2025/12/14
N2 - Retrieval-augmented generation (RAG) improves the reliability of large language model (LLM) answers by integrating external knowledge. However, RAG increases the end-to-end inference time since looking for relevant documents from large vector databases is computationally expensive. To address this, we introduce Proximity, an approximate key-value cache that optimizes the RAG workflow by leveraging similarities in user queries. Instead of treating each query independently, Proximity reuses previously retrieved documents when similar queries appear, substantially reducing the reliance on expensive vector database lookups. To efficiently scale, Proximity employs a locality-sensitive hashing (LSH) scheme that enables fast cache lookups while preserving retrieval accuracy. We evaluate Proximity using the MMLU and MedRAG question-answering benchmarks. Our experiments demonstrate that Proximity with our LSH scheme and a realistically skewed MedRAG workload reduces database calls by 77.2% while maintaining database recall and test accuracy. We experiment with different similarity tolerances and cache capacities, and show that the time spent within the Proximity cache remains low and constant (4.8 μs) even as the cache grows substantially in size. Our results demonstrate that approximate caching is a practical and effective strategy for optimizing RAG-based systems.
AB - Retrieval-augmented generation (RAG) improves the reliability of large language model (LLM) answers by integrating external knowledge. However, RAG increases the end-to-end inference time since looking for relevant documents from large vector databases is computationally expensive. To address this, we introduce Proximity, an approximate key-value cache that optimizes the RAG workflow by leveraging similarities in user queries. Instead of treating each query independently, Proximity reuses previously retrieved documents when similar queries appear, substantially reducing the reliance on expensive vector database lookups. To efficiently scale, Proximity employs a locality-sensitive hashing (LSH) scheme that enables fast cache lookups while preserving retrieval accuracy. We evaluate Proximity using the MMLU and MedRAG question-answering benchmarks. Our experiments demonstrate that Proximity with our LSH scheme and a realistically skewed MedRAG workload reduces database calls by 77.2% while maintaining database recall and test accuracy. We experiment with different similarity tolerances and cache capacities, and show that the time spent within the Proximity cache remains low and constant (4.8 μs) even as the cache grows substantially in size. Our results demonstrate that approximate caching is a practical and effective strategy for optimizing RAG-based systems.
KW - approximate caching
KW - large language models
KW - latency reduction
KW - machine learning systems
KW - neural information retrieval
KW - query optimization
KW - retrieval-augmented generation
KW - vector databases
UR - https://www.scopus.com/pages/publications/105026727890
U2 - 10.1145/3721462.3770776
DO - 10.1145/3721462.3770776
M3 - Conference contribution
AN - SCOPUS:105026727890
T3 - Middleware 2025 - Proceedings of the 26th ACM International Middleware Conference
SP - 340
EP - 353
BT - Middleware 2025 - Proceedings of the 26th ACM International Middleware Conference
PB - Association for Computing Machinery, Inc
T2 - 26th ACM International Middleware Conference, Middleware 2025
Y2 - 15 December 2025 through 19 December 2025
ER -