Presentation

UPTPU: Improving Energy Efficiency of a Tensor Processing Unit through Underutilization Based Power-Gating
Time: Tuesday, December 7th, 3:30pm - 3:50pm PST
Location: 3016
Event Type: Research Manuscript
Virtual Programs: Presented In-Person
Keywords: System-on-Chip Design Methodology
Topics: EDA
Description: The AI boom is bringing a plethora of domain-specific architectures for neural network computation. Google's Tensor Processing Unit (TPU), a Deep Neural Network (DNN) accelerator, has replaced the CPUs/GPUs in its data centers, claiming a more than 15x higher inference rate. However, the unprecedented growth in DNN workloads, driven by the widespread use of AI services, projects increasing energy consumption for TPU-based data centers. In this work, we parametrize the extreme hardware underutilization in the TPU systolic array and propose UPTPU: an intelligent, dataflow-adaptive power-gating paradigm that provides a staggering 3.5x-6.5x improvement in TPU energy efficiency across different input batch sizes.
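To make the underutilization idea concrete, here is a minimal sketch of how idle regions of a systolic array could be identified for power-gating. It assumes a simplified mapping (an illustrative assumption, not necessarily the paper's dataflow) in which the input batch occupies the column dimension of a 256x256 weight-stationary array, so small batches leave trailing columns idle. The array size and the helpers idle_columns and power_gate_mask are hypothetical names introduced for illustration.

```python
# Illustrative sketch only -- not the authors' UPTPU implementation.
# Assumption: a 256x256 weight-stationary systolic array in which the
# input batch is mapped along the columns, so for small batch sizes the
# remaining columns receive no work and could be power-gated.

ARRAY_ROWS = 256  # assumed systolic array height
ARRAY_COLS = 256  # assumed systolic array width


def idle_columns(batch_size: int, array_cols: int = ARRAY_COLS) -> int:
    """Number of array columns left unused by a given batch size."""
    used = min(batch_size, array_cols)
    return array_cols - used


def power_gate_mask(batch_size: int,
                    rows: int = ARRAY_ROWS,
                    cols: int = ARRAY_COLS) -> list[list[bool]]:
    """Boolean mask over the array: True means the PE may be power-gated."""
    used_cols = min(batch_size, cols)
    return [[col >= used_cols for col in range(cols)] for _ in range(rows)]


if __name__ == "__main__":
    # Show how column utilization (and thus gating opportunity) scales
    # with batch size under this simplified mapping.
    for batch in (1, 16, 64, 256):
        gated = idle_columns(batch)
        util = 100.0 * (ARRAY_COLS - gated) / ARRAY_COLS
        print(f"batch={batch:3d}: {gated:3d} idle columns "
              f"({util:.1f}% column utilization)")
```

Under this toy model, a batch size of 16 leaves 240 of 256 columns idle, which is the kind of structural underutilization a dataflow-aware power-gating scheme would exploit; the actual mapping, gating granularity, and energy accounting in UPTPU are described in the manuscript itself.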