PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning
Time: Wednesday, December 8th, 3:30pm - 3:50pm PST
Event Type: Research Manuscript
Virtual Programs: Presented In-Person
RTL/Logic Level and High-level Synthesis
Description: In this work, we present a reinforcement learning (RL) based approach to designing parallel prefix circuits, such as adders and priority encoders, which are fundamental to high-performance digital design. Unlike prior methods, our approach designs solutions tabula rasa, purely through learning with synthesis in the loop. We design a grid-based state-action representation and an RL environment for constructing legal prefix circuits. Deep convolutional RL agents trained in this environment produce prefix adder circuits that Pareto-dominate existing baselines, with up to 16.0% and 30.2% lower area for the same delay in the 32b and 64b settings, respectively.
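To make the underlying computation concrete: a parallel prefix circuit evaluates all running results of an associative operator (e.g., the carry-propagation operator in an adder) in O(log n) levels rather than a serial chain. The sketch below, a hypothetical illustration not taken from the paper, shows the Kogge-Stone prefix pattern, one of the classic baselines such RL-designed circuits are compared against; the function name and interface are illustrative.

```python
def kogge_stone_scan(xs, op):
    """Parallel prefix (scan) in the Kogge-Stone pattern.

    After ceil(log2(n)) levels, out[i] = op(xs[0], ..., xs[i]).
    `op` must be associative, mirroring how prefix adders combine
    generate/propagate signals level by level.
    """
    out = list(xs)
    d = 1  # span combined at this level
    while d < len(out):
        # Each position i >= d combines with the result d steps back;
        # all combinations at a level are independent (parallel in hardware).
        out = [op(out[i - d], out[i]) if i >= d else out[i]
               for i in range(len(out))]
        d *= 2
    return out
```

For example, scanning `[1, 2, 3, 4]` with addition yields the running sums `[1, 3, 6, 10]` in two levels instead of three serial steps; the RL agent's task is to explore prefix structures between such regular patterns and fully serial ones, trading area against delay.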