SS-LRU: A Smart Segmented LRU Caching

Chunhua Li, Man Wu, Yuhan Liu, Ke Zhou*, Ji Zhang, Yunqing Sun

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)

Abstract

Many caching policies use machine learning to predict data reuse, but they ignore the impact of incorrect predictions on cache performance, especially for large objects. In this paper, we propose a smart segmented LRU (SS-LRU) replacement policy, which adopts a size-aware classifier designed for cache scenarios and accounts for the cache cost caused by misprediction. In addition, SS-LRU enhances the migration rules of segmented LRU (SLRU) and implements smart caching with unequal priorities and segment sizes based on prediction and multiple access patterns. We conducted extensive experiments on real-world workloads to demonstrate the superiority of our approach over state-of-the-art caching policies.
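For context on the structure the abstract refers to, the sketch below shows a minimal two-segment LRU (probationary + protected) with a hypothetical reuse-predictor hook. It is only an illustration of the classic SLRU layout that SS-LRU builds on; the predictor interface, segment sizing, and promotion/demotion rules here are assumptions for readability, not the cost-sensitive classifier or migration rules described in the paper.

```python
from collections import OrderedDict


class SegmentedLRU:
    """Minimal SLRU sketch: a probationary and a protected segment.

    Illustrative only; the `predictor` hook is a hypothetical stand-in for a
    learned reuse classifier and does not reflect SS-LRU's actual policy.
    """

    def __init__(self, probation_size, protected_size, predictor=None):
        self.probation = OrderedDict()   # newly admitted or demoted objects
        self.protected = OrderedDict()   # objects promoted on a repeat hit
        self.probation_size = probation_size
        self.protected_size = protected_size
        # hypothetical predictor: (key, size) -> True if reuse is expected
        self.predictor = predictor or (lambda key, size: False)

    def get(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)          # refresh recency
            return self.protected[key]
        if key in self.probation:
            size = self.probation.pop(key)            # hit in probation: promote
            self._insert_protected(key, size)
            return size
        return None                                   # miss

    def put(self, key, size):
        if self.get(key) is not None:
            return                                    # already cached
        # assumed use of the predictor: objects expected to be reused are
        # admitted directly to the protected segment
        if self.predictor(key, size):
            self._insert_protected(key, size)
        else:
            self.probation[key] = size
            if len(self.probation) > self.probation_size:
                self.probation.popitem(last=False)    # evict probation LRU

    def _insert_protected(self, key, size):
        self.protected[key] = size
        if len(self.protected) > self.protected_size:
            # demote the protected LRU back to probation instead of evicting
            old_key, old_size = self.protected.popitem(last=False)
            self.probation[old_key] = old_size
            if len(self.probation) > self.probation_size:
                self.probation.popitem(last=False)
```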

Original language: English
Title of host publication: Proceedings of the 59th ACM/IEEE Design Automation Conference, DAC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 397-402
Number of pages: 6
ISBN (Electronic): 9781450391429
DOIs
Publication status: Published - 10 Jul 2022
Externally published: Yes
Event: 59th ACM/IEEE Design Automation Conference, DAC 2022 - San Francisco, United States
Duration: 10 Jul 2022 - 14 Jul 2022

Publication series

Name: Proceedings - Design Automation Conference
ISSN (Print): 0738-100X

Conference

Conference: 59th ACM/IEEE Design Automation Conference, DAC 2022
Country/Territory: United States
City: San Francisco
Period: 10/07/22 - 14/07/22

Keywords

  • cache replacement
  • cost-sensitive
  • machine learning
  • smart
