TaiChi: A Hybrid Compression Format for Binary Sparse Matrix-Vector Multiplication on GPU

Jianhua Gao, Weixing Ji*, Zhaonian Tan, Yizhuo Wang, Feng Shi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Binary Sparse Matrix-Vector Multiplication (SpMV) is a heavy computational kernel in web-link analysis, integer factorization, compressed sensing, spectral graph theory, and other domains. Testing several popular GPU-based SpMV implementations on 400 sparse matrices, we observed that data transfer to GPU memory accounts for a large part of the total computation time. The transfer of constant value '1's can be easily eliminated for binary sparse matrices. However, compressing index arrays has always been a great challenge. This article proposes a new compression format, TaiChi, to further reduce index data copies and improve the performance of SpMV, especially for diagonally dominant binary sparse matrices. Input matrices are first partitioned into relatively dense and ultra-sparse areas. Then the dense areas are encoded inversely by marking '0's, while the ultra-sparse areas are encoded by marking '1's. We also designed a new SpMV algorithm using only addition and subtraction for binary matrices based on our partition and encoding format. Evaluation results on real-world binary sparse matrices show that our hybrid encoding for binary matrices significantly reduces the data transfer and speeds up the kernel execution. It achieves highest transfer and kernel execution speedups of 5.63x and 3.84x on a GTX 1080 Ti, and 3.39x and 3.91x on a Tesla V100.
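The core idea in the abstract (standard encoding of '1's for ultra-sparse rows, inverse encoding of '0's over a dense span, and an SpMV kernel that uses only additions and subtractions) can be illustrated with a small sketch. This is not the paper's TaiChi format or its GPU kernel; the data layout (`sparse_rows`, `dense_rows`) and function name are hypothetical, chosen only to show why marking '0's in a dense region lets the product be computed as a segment sum minus a few subtractions:

```python
import numpy as np

def binary_spmv_hybrid(n_rows, x, sparse_rows, dense_rows):
    """Illustrative binary SpMV with a hybrid encoding (not the TaiChi format itself).

    sparse_rows: {row: [column indices of the '1' entries]}   (mark the 1s)
    dense_rows:  {row: (lo, hi, [column indices of '0's in [lo, hi)])}
                 (inverse encoding: mark the 0s inside a dense span)
    """
    y = np.zeros(n_rows)
    # Ultra-sparse rows: values are implicitly 1, so SpMV is pure addition.
    for i, cols in sparse_rows.items():
        for j in cols:
            y[i] += x[j]
    # Dense rows: start from the segment sum of x over the dense span,
    # then subtract x at the few positions where the matrix holds a 0.
    for i, (lo, hi, zero_cols) in dense_rows.items():
        y[i] = x[lo:hi].sum()
        for j in zero_cols:
            y[i] -= x[j]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sparse_rows = {2: [0, 5], 3: [3]}                 # rows with very few 1s
dense_rows = {0: (0, 6, [2]), 1: (1, 5, [3])}     # dense spans with few 0s
y = binary_spmv_hybrid(4, x, sparse_rows, dense_rows)
```

When a dense span holds far more '1's than '0's, storing the '0' indices needs far fewer index entries than storing the '1's, which is the transfer saving the abstract describes; the kernel still touches each stored index exactly once, using one subtraction per marked '0' instead of one addition per '1'.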

Original language: English
Pages (from-to): 3732-3745
Number of pages: 14
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 33
Issue number: 12
DOI
Publication status: Published - 1 Dec 2022
