Title
Multi-Armed Bandits for Autonomous Timing-driven Design Optimization
Abstract
Timing closure is a complex process that involves many iterative optimization steps applied in various phases of the physical design flow. Cell sizing and transistor threshold selection, as well as datapath and clock buffering, are some of the tools available for design optimization. Currently, design optimization methods are integrated into EDA tools and applied incrementally in various parts of the flow, while the optimal order of their application is yet to be determined. In this work, we rely on reinforcement learning, through the Multi-Armed Bandit model for decision making under uncertainty, to automatically suggest online which optimization heuristic should be applied to the design. The goal is to improve the performance metrics based on the rewards learned from previous applications of each heuristic. Experimental results show that automating design optimization with machine learning not only produces designs that are close to the best published results derived from deterministic approaches, but also allows the optimization flow to execute without any human in the loop and without any offline training of the heuristic-orchestration algorithm.
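To illustrate the kind of online decision making the abstract describes, the following is a minimal sketch (not the authors' implementation) of a UCB1 bandit that picks which optimization heuristic to apply next and updates its estimates from an observed reward. The heuristic names, the reward definition (e.g., normalized slack improvement), and the apply_heuristic stub are all assumptions for illustration.

```python
import math
import random

# Assumed arm names; the paper's actual set of heuristics may differ.
HEURISTICS = ["cell_sizing", "vt_swap", "datapath_buffering", "clock_buffering"]

class UCB1:
    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}    # times each heuristic was applied
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per heuristic
        self.total = 0

    def select(self):
        # Apply each heuristic once before relying on confidence bounds.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        # UCB1 rule: mean reward plus an exploration bonus that shrinks with use.
        return max(self.arms,
                   key=lambda a: self.values[a] +
                       math.sqrt(2.0 * math.log(self.total) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.total += 1
        # Incremental update of the mean reward for this heuristic.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def apply_heuristic(name):
    # Placeholder for invoking an EDA optimization step and measuring the
    # resulting timing improvement; here it is a random stub.
    return random.random()

bandit = UCB1(HEURISTICS)
for _ in range(50):                  # optimization iterations
    arm = bandit.select()
    reward = apply_heuristic(arm)    # e.g., improvement in worst negative slack
    bandit.update(arm, reward)
```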
Year
2019
DOI
10.1109/PATMOS.2019.8862056
Venue
2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS)
Keywords
autonomous timing-driven design optimization, timing closure, complex process, optimization steps, physical design flow, transistor threshold selection, datapath, clock buffering, design optimization methods, EDA tools, optimal order, multiarmed bandit model, optimization heuristic, optimization flow
Field
Datapath, Heuristic, Mathematical optimization, Computer science, Real-time computing, Electronic design automation, Sizing, Physical design, Human-in-the-loop, Timing closure, Reinforcement learning
DocType
Conference
ISSN
2474-5456
ISBN
978-1-7281-2104-8
Citations
0
PageRank
0.34
References
10
Authors
4