VLSI Structure-aware Placement for Convolutional Neural Network Accelerator Units
Physical Design and Verification, Lithography and DFM
AI-dedicated hardware designs are proliferating across a wide range of AI applications. These designs pose new challenges to physical design, most notably severe routing congestion caused by the near-fully-connected structure between adjacent neural-network layers in the hardware, which conventional placement methods cannot resolve. This paper proposes a VLSI structure-aware placement framework with kernel-based placement region insertion to minimize congestion for the processing engines of convolutional neural network accelerators. Experimental results show that, compared with leading commercial tools, our framework significantly reduces global routing congestion without degrading wirelength.