Optimization of Large-Scale Sparse Matrix-Vector Multiplication on Multi-GPU Systems

  • Jianhua Gao
  • Weixing Ji*
  • Yizhuo Wang

*Corresponding author for this work

Research output: Contribution to journal · Article · peer-review

Abstract

Sparse matrix-vector multiplication (SpMV) is one of the most important kernels in iterative algorithms for solving sparse linear systems. The limited storage and computational resources of an individual GPU restrict both the scale and speed of SpMV-based problem solving. As real-world engineering problems continue to grow in complexity, the need to execute iterative solving algorithms collaboratively across multiple GPUs becomes increasingly apparent. Although multi-GPU SpMV reduces kernel execution time, it also introduces additional data transmission overhead, which diminishes the performance gains obtained from parallelization across GPUs. Based on the distribution characteristics of non-zero elements in sparse matrices and the trade-off between redundant computation and data transfer overhead, this article introduces a series of SpMV optimization techniques tailored for multi-GPU environments that effectively improve the execution efficiency of iterative algorithms on multiple GPUs. First, we propose a two-level matrix partitioning method based on non-zero elements to increase the overlap of kernel execution and data transmission. Then, considering the irregular distribution of non-zero elements in sparse matrices, we propose a long-row-aware matrix partitioning method that hides more data transmissions. Finally, we propose an optimization that trades redundant, inexpensive short-row computations for costly data transmissions. Our experimental evaluation demonstrates that, compared with SpMV on a single GPU, the proposed method achieves average speedups of 2.00× and 1.85× on platforms equipped with two RTX 3090 and two Tesla V100-SXM2 GPUs, respectively, and an average speedup of 2.65× on a platform equipped with four Tesla V100-SXM2 GPUs.
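To make the abstract's core ideas concrete, the sketch below shows a sequential CSR-format SpMV kernel and an illustrative row partitioner that balances non-zero elements across GPUs, in the spirit of the non-zero-based partitioning the abstract describes. This is a minimal Python illustration, not the authors' implementation; the function names and the contiguous-row partitioning strategy are assumptions for exposition.

```python
from bisect import bisect_left

def spmv_csr(row_ptr, col_idx, vals, x):
    """Reference SpMV y = A @ x for a matrix A stored in CSR format.

    row_ptr: length n_rows + 1; row r owns entries row_ptr[r]..row_ptr[r+1].
    col_idx, vals: column indices and values of the non-zero entries.
    """
    y = [0.0] * (len(row_ptr) - 1)
    for r in range(len(y)):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            y[r] += vals[k] * x[col_idx[k]]
    return y

def partition_rows_by_nnz(row_ptr, num_parts):
    """Split rows into num_parts contiguous chunks with roughly equal nnz.

    Because row_ptr is the cumulative non-zero count, each cut point is
    found by binary search for the row where the running nnz total crosses
    the next per-partition target. Returns half-open (start_row, end_row)
    ranges, one per partition (e.g. one per GPU).
    """
    total_nnz = row_ptr[-1]
    n_rows = len(row_ptr) - 1
    bounds = [0]
    for p in range(1, num_parts):
        goal = p * total_nnz / num_parts
        # Smallest row index whose cumulative nnz reaches the target.
        cut = bisect_left(row_ptr, goal, bounds[-1], n_rows)
        bounds.append(cut)
    bounds.append(n_rows)
    return [(bounds[i], bounds[i + 1]) for i in range(num_parts)]
```

Balancing on non-zero counts rather than row counts matters because real-world sparse matrices are irregular: a few long rows can hold most of the work, so an equal-rows split would leave one GPU far behind the others.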

Original language: English
Article number: 69
Journal: Transactions on Architecture and Code Optimization
Volume: 21
Issue number: 4
DOIs
Publication status: Published - 19 Nov 2024

Keywords

  • Multi-GPU system
  • data transmission hiding
  • sparse matrix partitioning
  • sparse matrix-vector multiplication
