TY - JOUR
T1 - Optimization of Sparse Matrix Computation for Algebraic Multigrid on GPUs
AU - Wang, Yizhuo
AU - Chang, Fangli
AU - Wei, Bingxin
AU - Gao, Jianhua
AU - Ji, Weixing
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/9/14
Y1 - 2024/9/14
N2 - Algebraic multigrid (AMG) is one of the most efficient and widely used methods for solving sparse linear systems. The computational process of AMG mainly consists of a series of iterative calculations of generalized sparse matrix-matrix multiplication (SpGEMM) and sparse matrix-vector multiplication (SpMV). Optimizing these sparse matrix computations is crucial for accelerating the solution of linear systems. In this paper, we first focus on optimizing the SpGEMM algorithm in AmgX, a popular AMG library for GPUs. We propose a new algorithm called SpGEMM-upper, which achieves an average speedup of 2.02× on a Tesla V100 and 1.96× on an RTX 3090 over the original algorithm. Next, through experimental investigation, we conclude that no single SpGEMM library or algorithm performs optimally across most sparse matrices, and the same holds true for SpMV. Therefore, we build machine learning-based models to predict the optimal SpGEMM and SpMV algorithms used in the AMG computation process. Finally, we integrate the prediction models, SpGEMM-upper, and other selected algorithms into a framework for adaptive sparse matrix computation in AMG. Our experimental results show that the framework achieves promising performance improvements on the test set.
AB - Algebraic multigrid (AMG) is one of the most efficient and widely used methods for solving sparse linear systems. The computational process of AMG mainly consists of a series of iterative calculations of generalized sparse matrix-matrix multiplication (SpGEMM) and sparse matrix-vector multiplication (SpMV). Optimizing these sparse matrix computations is crucial for accelerating the solution of linear systems. In this paper, we first focus on optimizing the SpGEMM algorithm in AmgX, a popular AMG library for GPUs. We propose a new algorithm called SpGEMM-upper, which achieves an average speedup of 2.02× on a Tesla V100 and 1.96× on an RTX 3090 over the original algorithm. Next, through experimental investigation, we conclude that no single SpGEMM library or algorithm performs optimally across most sparse matrices, and the same holds true for SpMV. Therefore, we build machine learning-based models to predict the optimal SpGEMM and SpMV algorithms used in the AMG computation process. Finally, we integrate the prediction models, SpGEMM-upper, and other selected algorithms into a framework for adaptive sparse matrix computation in AMG. Our experimental results show that the framework achieves promising performance improvements on the test set.
KW - Algebraic multigrid
KW - generalized sparse matrix-matrix multiplication
KW - GPU
KW - machine learning
KW - sparse matrix-vector multiplication
UR - http://www.scopus.com/inward/record.url?scp=85205001716&partnerID=8YFLogxK
U2 - 10.1145/3664924
DO - 10.1145/3664924
M3 - Article
AN - SCOPUS:85205001716
SN - 1544-3566
VL - 21
JO - ACM Transactions on Architecture and Code Optimization
JF - ACM Transactions on Architecture and Code Optimization
IS - 3
M1 - 54
ER -