Abstract
Distributed key-value storage and computation are essential components of cloud services. As demand for high-performance systems has grown significantly, a new architecture has emerged that separates computing and storage nodes and connects them over RDMA-enabled networks. Existing RDMA-enabled systems use client-side cached indexes to reduce communication overhead and improve performance. However, such approaches can cause heavy server-side CPU contention under dynamic workloads (i.e., <italic>inserts</italic>) and a large accuracy gap, since the client-side and server-side indexes differ. These drawbacks limit the performance of RDMA-enabled systems. In this paper, we introduce AStore to address these issues and achieve high performance with a low memory footprint. AStore employs a new unified architecture that uses an adaptive learned index both as the server-side index and as the client-side cached index, handling dynamic and static workloads alike. We propose several optimization techniques for the dynamic and static workload procedures and design a leaf-node lock mechanism to support highly concurrent access. Extensive evaluations on the YCSB, LGN, and OSM datasets demonstrate that AStore outperforms XStore by up to 75.2%, 107.3%, and 57.7% on read-only workloads, and by up to 65.7%, 108.7%, and 74.3% on write-read workloads.
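The learned-index idea the abstract builds on can be illustrated with a minimal sketch: a model (here, simple least-squares linear regression) predicts a key's position in a sorted array, and a local search bounded by the model's maximum observed error corrects the prediction. This is a generic illustration only, not AStore's actual adaptive index or its RDMA code path; all names here are hypothetical.

```python
class LearnedIndex:
    """Generic learned-index sketch: model-predicted position plus
    an error-bounded local search over a sorted key array."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Fit position ~ slope * key + intercept by least squares.
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in self.keys)
        self.slope = (
            sum((k - mean_k) * (i - mean_p) for i, k in enumerate(self.keys)) / var
            if var else 0.0
        )
        self.intercept = mean_p - self.slope * mean_k
        # Maximum prediction error bounds the correction window.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        pos = self._predict(key)
        lo = max(0, pos - self.err)
        hi = min(len(self.keys), pos + self.err + 1)
        # Search only within the error-bounded window.
        for i in range(lo, hi):
            if self.keys[i] == key:
                return i
        return None
```

For example, `LearnedIndex(range(0, 1000, 2)).lookup(10)` returns position 5. When such a model is cached on the client side, a lookup can translate directly into a small number of one-sided RDMA reads over the bounded window, which is the communication saving the abstract refers to.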
Original language | English |
---|---|
Pages (from-to) | 1-18 |
Number of pages | 18 |
Journal | IEEE Transactions on Knowledge and Data Engineering |
DOI | |
Publication status | Accepted/In press - 2024 |