Cache management with partitioning-aware eviction and thread-aware insertion/promotion policy

Junmin Wu*, Xiufeng Sui, Yixuan Tang, Xiaodong Zhu, Jing Wang, Guoliang Chen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

With recent advances in processor technology, LRU-based shared last-level caches (LLCs) have been widely employed in modern chip multiprocessors (CMPs). However, past research [1,2,8,9] indicates that LLC performance, and hence overall CMP performance, can be degraded severely by LRU in the presence of inter-thread interference or when the working set exceeds the cache size. Existing approaches to this problem yield limited improvement in overall cache performance because they typically target a single type of memory access behavior and therefore do not fully consider the tradeoffs among different access behaviors. In this paper, we propose a unified cache management policy, Partitioning-Aware Eviction and Thread-aware Insertion/Promotion (PAE-TIP), that effectively enhances capacity management and adaptive insertion/promotion, and thereby improves overall cache performance. Specifically, PAE-TIP employs an adaptive mechanism to decide where to place incoming lines and where to move lines that hit, and it chooses a victim line based on the target partitioning given by utility-based cache partitioning (UCP) [2]. We show that PAE-TIP covers a variety of memory access behaviors simultaneously and provides a good tradeoff for overall cache performance improvement while retaining competitively low hardware and design overhead. An evaluation on 4-way CMPs shows that a PAE-TIP-managed LLC improves overall performance by 19.3% on average over the LRU policy. Furthermore, PAE-TIP delivers 1.09x the performance of PIPP, 1.11x that of TADIP, and 1.12x that of UCP.
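To make the abstract's mechanism concrete, the following is a minimal illustrative sketch, not the paper's implementation: one set of a shared LLC modeled as a recency stack, with per-thread insertion positions, gradual (single-step) promotion on a hit, and eviction that victimizes the LRU line of a thread exceeding its UCP-style way target. The names `target_ways` and `insert_pos` are hypothetical, and the exact insertion/promotion rules of PAE-TIP differ from this simplification.

```python
class CacheSet:
    """One set of a shared LLC, managed with a partitioning-aware
    eviction and thread-aware insertion/promotion policy (sketch)."""

    def __init__(self, assoc, target_ways, insert_pos):
        self.assoc = assoc                  # associativity (ways per set)
        self.target_ways = target_ways      # UCP-style per-thread way targets
        self.insert_pos = insert_pos        # per-thread insertion position (0 = MRU)
        self.stack = []                     # recency stack: (thread_id, tag), index 0 = MRU

    def occupancy(self, tid):
        # number of ways currently held by thread `tid`
        return sum(1 for t, _ in self.stack if t == tid)

    def lookup(self, tid, tag):
        for i, (t, g) in enumerate(self.stack):
            if t == tid and g == tag:
                # gradual promotion: move the hit line one position toward
                # MRU instead of straight to the top
                if i > 0:
                    self.stack[i - 1], self.stack[i] = self.stack[i], self.stack[i - 1]
                return True
        return False

    def insert(self, tid, tag):
        if len(self.stack) >= self.assoc:
            self.evict()
        # thread-aware insertion: place the incoming line at the thread's
        # assigned position in the recency stack, not always at MRU
        pos = min(self.insert_pos[tid], len(self.stack))
        self.stack.insert(pos, (tid, tag))

    def evict(self):
        # partitioning-aware eviction: starting from LRU, victimize a line
        # of a thread that exceeds its target allocation; otherwise fall
        # back to plain global LRU
        for i in range(len(self.stack) - 1, -1, -1):
            tid = self.stack[i][0]
            if self.occupancy(tid) > self.target_ways[tid]:
                return self.stack.pop(i)
        return self.stack.pop()
```

For example, in a 4-way set where thread 0 is targeted at 3 ways and thread 1 at 1 way, a fill by thread 0 into a full set evicts thread 1's LRU line (thread 1 holds 2 ways, above its target) rather than the global LRU line.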

Original language: English
Title of host publication: Proceedings - International Symposium on Parallel and Distributed Processing with Applications, ISPA 2010
Pages: 374-381
Number of pages: 8
DOIs
Publication status: Published - 2010
Externally published: Yes
Event: International Symposium on Parallel and Distributed Processing with Applications, ISPA 2010 - Taipei, Taiwan, Province of China
Duration: 6 Sept 2010 - 9 Sept 2010

Publication series

Name: Proceedings - International Symposium on Parallel and Distributed Processing with Applications, ISPA 2010

Conference

Conference: International Symposium on Parallel and Distributed Processing with Applications, ISPA 2010
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 6/09/10 - 9/09/10

Keywords

  • Cache partitioning
  • Insertion
  • Promotion
  • Shared cache

