BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices
Hosted on a Virtual Platform
Description: In recent years, Graph Neural Networks (GNNs) have emerged as state-of-the-art algorithms for graph learning. However, the computational cost of GNNs grows rapidly with model and graph size, and performing GNN inference in real time has become a challenging problem. In this paper, we propose BlockGNN, a software-hardware co-design approach for efficient GNN acceleration. At the algorithm level, we leverage block-circulant weight matrices to compress GNNs. At the hardware level, we propose a pipelined CirCore architecture to support efficient block-circulant matrix computation. Based on CirCore, we present a BlockGNN accelerator that accelerates various GNNs. Moreover, we introduce a performance model to determine the hardware parameters automatically.
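The key idea behind block-circulant compression is that a weight matrix partitioned into circulant blocks is fully defined by the first column of each block, and each block-vector product reduces to a circular convolution computable with FFTs. The abstract does not give implementation details, so the following is only a minimal NumPy sketch of that technique under standard conventions (blocks defined by their first columns); the function names and block layout are illustrative, not the paper's actual implementation.

```python
import numpy as np

def circulant(c):
    # Build the dense circulant matrix whose first column is c
    # (used here only to verify the FFT-based result).
    n = len(c)
    return np.stack([np.roll(c, j) for j in range(n)], axis=1)

def block_circulant_matvec(first_cols, x):
    # first_cols: (p, q, b) array; entry [i, j] is the first column of the
    # b-by-b circulant block at block-row i, block-column j.
    # x: input vector of length q*b.
    # Each circulant block-vector product is a circular convolution,
    # so it can be computed as ifft(fft(w) * fft(x_block)) in O(b log b).
    p, q, b = first_cols.shape
    Xf = np.fft.fft(x.reshape(q, b), axis=1)   # FFT of each input block
    Wf = np.fft.fft(first_cols, axis=2)        # FFT of each block's defining vector
    Yf = (Wf * Xf[None, :, :]).sum(axis=1)     # pointwise multiply, sum over block-columns
    return np.fft.ifft(Yf, axis=1).real.reshape(p * b)
```

With block size b, this stores p*q*b parameters instead of p*q*b*b and replaces each b-by-b block multiply with FFTs, which is the source of both the compression ratio and the compute savings the abstract refers to.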