Presentation

Accelerator Architectures and Machine Learning
Time: Tuesday, December 7th, 1:00pm - 1:45pm PST
Location: DAC Pavilion
Event Type: SKY Talk
Virtual Programs: Hosted in Virtual Platform; Presented In-Person
Description: Over the past decade, Deep Neural Network (DNN) workloads have dramatically increased the computational requirements of AI training and inference systems, significantly outpacing the performance gains traditionally obtained through Moore's Law silicon scaling. New computer architectures, powered by low-precision arithmetic engines (FP16 for training and INT8 for inference), have laid the foundation for high-performance AI systems; however, there remains an insatiable demand for AI compute with much higher power efficiency and performance. In this talk, I'll outline some of the exciting innovations, as well as the key technical challenges, that can enable systems with aggressively scaled precision for inference and training while fully preserving model fidelity. I'll also highlight some key complementary trends, including 3D stacking, sparsity, and analog computing, that can enable dramatic growth in AI system capabilities over the next decade.
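For readers unfamiliar with the low-precision inference arithmetic mentioned in the abstract, the following is a minimal, illustrative sketch of symmetric per-tensor INT8 quantization in NumPy. It is not taken from the talk; the function names and the per-tensor scaling choice are assumptions made purely for illustration.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map float values into [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original tensor."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight tensor and measure the round-trip error.
weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale)).max()
print(f"scale={scale:.4f}, max abs error={error:.4f}")
```

The talk's theme of "aggressively scaled precision while fully preserving model fidelity" is essentially about keeping this kind of round-trip error small enough that end-to-end model accuracy is unaffected, even as the number of bits per value shrinks.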