BayesFT: Bayesian Optimization for Fault Tolerant Neural Network Architecture
Approximate Computing for AI/ML
Description
To deploy deep learning algorithms in resource-limited scenarios, an emerging device, resistive random-access memory (ReRAM), has been regarded as a promising substrate for analog computing. However, the practicality of ReRAM is limited primarily by weight variations in the deployed neural networks, which arise from multiple factors including manufacturing variability and thermal noise. In this paper, we first explore neural network architecture candidates that are robust to weight variations and then propose a Bayesian optimization framework to search for robust deep neural network architectures. Empirical experiments demonstrate that our algorithmic framework outperforms state-of-the-art methods by a large margin.
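To make the robustness objective concrete, the sketch below evaluates a model under simulated ReRAM weight variation. It is a minimal illustration, not the paper's method: it assumes a multiplicative log-normal noise model on the weights (a common, but here assumed, model of ReRAM programming noise) and uses a toy least-squares linear classifier on synthetic data in place of a trained deep network. All names and parameters (`sigma`, `n_trials`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a linear classifier on
# synthetic, linearly separable two-class data.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(int)

# Least-squares fit on +/-1 targets plays the role of the trained weights.
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

def accuracy(weights):
    """Classification accuracy of the linear model on the toy data."""
    return float(np.mean((X @ weights > 0).astype(int) == y))

def noisy_accuracy(weights, sigma, n_trials=100):
    """Mean accuracy under multiplicative log-normal weight variation,
    an assumed stand-in for ReRAM device noise. A robustness-aware
    search would use a score like this as its objective."""
    accs = []
    for _ in range(n_trials):
        noise = np.exp(rng.normal(0.0, sigma, size=weights.shape))
        accs.append(accuracy(weights * noise))
    return float(np.mean(accs))

clean = accuracy(w)
robust = noisy_accuracy(w, sigma=0.5)
print("clean accuracy:", clean)
print("mean accuracy under weight variation:", robust)
```

In the paper's setting, a score like `noisy_accuracy` would be computed per candidate architecture and fed to a Bayesian optimizer as the objective to maximize; the toy example only shows how the weight-variation model enters the evaluation.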