Abstract
Modern multi-core processors usually employ a shared level-2 cache to support fast data access among concurrent threads. However, under high resource demand, the commonly used LRU policy may cause interference among threads and degrade overall performance. Partitioning the shared cache is a relatively flexible resource allocation method, but most previous partitioning approaches target multi-programmed workloads and ignore the difference between shared and private data access patterns in multi-threaded workloads, reducing the utility of shared data. Moreover, most traditional cache partitioning methods assume a single memory access pattern and neglect the frequency and recency information of cache lines. In this paper, we study the access characteristics of private and shared data in multi-threaded workloads and propose a utility-based pseudo-partition cache partitioning mechanism (UPP). UPP dynamically collects utility information for each thread and for shared data, and uses the overall marginal utility as the metric for cache partitioning. In addition, UPP exploits both frequency and recency information of a workload so as to evict dead cache lines early and filter less-reused blocks through a dynamic insertion and promotion mechanism.
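To make the marginal-utility idea concrete, the sketch below shows a generic greedy way-allocation loop driven by per-thread utility monitors; it is an illustration of utility-based partitioning in general, not the paper's UPP algorithm, and the names (`NUM_THREADS`, `NUM_WAYS`, `hits`) are assumptions introduced for this example.

```c
/* Hypothetical sketch: greedy allocation of cache ways by marginal utility.
 * Not the UPP algorithm from the paper; names and sizes are illustrative. */
#include <stdio.h>

#define NUM_THREADS 4
#define NUM_WAYS    16

/* hits[t][w]: hits thread t would obtain if granted w ways
 * (filled in by utility monitors, e.g. shadow-tag sampling). */
static unsigned hits[NUM_THREADS][NUM_WAYS + 1];

/* Hand out ways one at a time to the thread whose hit count would
 * increase the most from one extra way (its marginal utility). */
void partition_ways(unsigned alloc[NUM_THREADS])
{
    for (int t = 0; t < NUM_THREADS; t++)
        alloc[t] = 0;

    for (int w = 0; w < NUM_WAYS; w++) {
        int best = 0;
        long best_gain = -1;
        for (int t = 0; t < NUM_THREADS; t++) {
            long gain = (long)hits[t][alloc[t] + 1] - (long)hits[t][alloc[t]];
            if (gain > best_gain) {
                best_gain = gain;
                best = t;
            }
        }
        alloc[best]++;
    }
}
```

UPP extends this kind of utility accounting by also tracking the utility of shared data and by using insertion and promotion decisions, rather than hard way boundaries, to enforce a pseudo-partition.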
| Original language | English |
|---|---|
| Pages (from-to) | 170-180 |
| Number of pages | 11 |
| Journal | Jisuanji Yanjiu yu Fazhan/Computer Research and Development |
| Volume | 50 |
| Issue number | 1 |
| Publication status | Published - Jan 2013 |
| Externally published | Yes |
Keywords
- Insertion policy
- Multi-core processors
- Multi-threaded program
- Replacement algorithm
- Shared cache partitioning