Presentation

VLSI Structure-aware Placement for Convolutional Neural Network Accelerator Units
Event Type
Research Manuscript
Virtual Programs
Hosted in Virtual Platform
Keywords
Physical Design and Verification, Lithography and DFM
Topics
EDA
Description

AI-dedicated hardware designs are proliferating rapidly across AI applications. These designs pose new challenges for physical design, most notably severe routing congestion caused by the near-fully-connected structure between adjacent neural-network layers in the hardware; such dense interconnections create congestion that conventional placement methods cannot resolve. This paper proposes a VLSI structure-aware placement framework with kernel-based placement-region insertion to minimize congestion for the processing engines of convolutional neural network accelerators. Experimental results show that our framework significantly reduces global routing congestion without wirelength degradation, compared with leading commercial tools.