MOVED TO VIRTUAL: 2nd ROAD4NN Workshop: Research Open Automatic Design for Neural Networks
Time: Sunday, December 5th, 8:00am - 5:00pm PST
Description: In the past decade, machine learning, especially neural-network-based deep learning, has achieved amazing success. Various neural networks, such as CNNs, RNNs, LSTMs, BERT, GNNs, and SNNs, have been deployed for industrial applications like image classification, speech recognition, and automated control. On one hand, neural network algorithms are evolving very rapidly: almost every week a new model appears from a major academic or industry institution. On the other hand, all major industry giants have been developing and/or deploying specialized hardware platforms to accelerate the performance and energy efficiency of neural networks across cloud and edge devices. These include Nvidia GPUs, Intel Nervana/Habana/Loihi ASICs, Xilinx FPGAs, Google TPUs, Microsoft Brainwave, and Amazon Inferentia, to name just a few. However, there is a significant gap between the rapid evolution of algorithms and the slower pace of hardware development, which calls for broader participation in software-hardware co-design from both academia and industry.
In this workshop, we focus on research open automatic design for neural networks: a holistic, open-source approach to general-purpose computer systems broadly inspired by neural networks. More specifically, we discuss full-stack open-source infrastructure support for developing and deploying novel neural networks, including novel algorithms and applications, hardware architectures and emerging devices, as well as programming, system, and tool support. We plan to bring together academic and industry experts to share their experience and discuss the challenges they face as well as potential focus areas for the community. Below is the planned workshop content.
First, we will solicit work-in-progress papers (four pages) from the community. Accepted papers will be published in proceedings (optional), and each will be invited for a 25-minute talk followed by 5 minutes of Q&A. Workshop topics include, but are not limited to:
• New algorithmic advances in neural networks
• Bio-plausible neural network models
• Neural network model compression and quantization
• Applications of neural networks to new areas
• Hardware acceleration and architecture for neural networks
• New circuits and devices for neural networks
• Abstractions to bridge the algorithm-hardware gap for neural networks
• Compilation and design automation support to map neural networks to hardware platforms
• System support to deploy neural networks in cloud and edge devices
• Benchmarks for various neural network models and hardware accelerators
• Other research infrastructures that enable the above studies
Second, we plan to invite established researchers from both academia (e.g., MIT, UCLA, Cornell, Stanford, Duke, Tsinghua University, Peking University) and industry (e.g., Intel, Xilinx, Nvidia, Microsoft, Google) to give keynote and invited talks.
Third, we plan to organize a panel session to give the audience more Q&A time. We plan to invite established researchers from the aforementioned institutions to join the panel.