Workshop 1
AI/Machine Learning for Circuit Design and Optimization
Organizer: Xin Zhang (IBM)
Circuit designers have long sought to improve productivity through design automation. With recent advances in AI and machine learning algorithms, there is growing exploration of how AI techniques can facilitate circuit design. Analog/mixed-signal circuits are ubiquitous, and researchers have tried many approaches to shorten their design cycle, including automatic transistor sizing, symmetric placement and routing, and circuit generators. Much recent work takes advantage of AI/machine learning techniques, such as reinforcement learning, graph neural networks, graph matching, and knowledge transfer, to improve the performance and accuracy of circuit design, simulation, and layout. Recent developments in this area have shown the potential for a strong impact on how circuits are designed. The objective of this workshop is to bring circuit designers together and provide opportunities to discuss, define, and understand the potential of AI/machine-learning-assisted circuit design. Pioneering researchers in the field will be invited to present their recent work on different circuit design use cases. Through the series of presentations in this workshop, we are looking for a viable path to enhance the productivity of analog/mixed-signal circuit design.
Live Session: June 13, 7:00 AM - 9:00 AM (JST)
About Xin Zhang
Xin Zhang is a Research Staff Member at the IBM T. J. Watson Research Center, Yorktown Heights, NY. He received the B.S. degree in electronics engineering from Xi’an Jiaotong University, Xi’an, China, in 2003, and the Ph.D. degree in microelectronics from Peking University, Beijing, China, in 2008. In 2008, he joined the Institute of Industrial Science, University of Tokyo, Tokyo, Japan, as a project researcher. In 2012, he was a visiting scholar at the University of California, Berkeley, and then a project research associate at the Institute of Industrial Science, University of Tokyo. In 2013, he was with the Institute of Microelectronics (IME), Agency for Science, Technology and Research (A*STAR), Singapore, as a scientist. In 2014, he joined IBM as a Research Staff Member. Dr. Zhang has authored or co-authored over 50 technical papers and holds over 20 filed or issued patents. His research interests include analog circuits, power management ICs, DC-DC converters, AC-DC converters, power devices, magnetics, machine learning hardware/accelerators, computer system architecture, and server system power delivery/packaging/cooling. He has served as a guest editor for the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS) and IEEE Solid-State Circuits Letters (SSC-L). He is currently serving as a technical program committee member for the IEEE Symposium on VLSI Circuits and the Applied Power Electronics Conference (APEC). He is a Senior Member of the IEEE.
- 1. Machine Learning for Agile IC Design and Manufacturing, David Pan, University of Texas at Austin
Abstract:
This talk will present some of our recent efforts that leverage AI/machine learning with domain-specific customizations for agile IC design and manufacturing closure. I will first show how we leverage deep learning hardware and software to develop a new open-source VLSI placement engine, DREAMPlace [DAC’19 Best Paper Award, TCAD 2020], which is around 40x faster than the previous state-of-the-art academic global placer while maintaining high quality. I will then present the open-source MAGICAL 1.0 framework, which leverages both machine and human intelligence to produce fully automated analog layouts from netlists to GDSII, with a silicon-proven 40nm 1GS/s ∆Σ ADC [CICC’21]. I will further show how we leverage the recent AI breakthrough in generative adversarial networks (GANs) to develop end-to-end lithography modeling with orders-of-magnitude speedup [DAC’19 Best Paper Candidate, ISPD’20 Best Paper Award], which can help design-technology co-optimization and manufacturing closure.
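To make the placement idea concrete, the minimal sketch below (not the actual DREAMPlace code) shows how global placement can be cast as a differentiable objective and handed to a deep learning toolkit: cell coordinates become trainable parameters, a smooth wirelength approximation becomes the loss, and GPU autograd plus a stock optimizer do the optimization. The random toy netlist and the omission of density, legalization, and pin-offset terms are all simplifying assumptions.

```python
# Minimal sketch of placement-as-differentiable-optimization (toy netlist;
# a real placer adds a density penalty, pin offsets, and legalization).
import torch

num_cells = 200
nets = [torch.randint(0, num_cells, (4,)) for _ in range(500)]  # toy netlist

# Cell coordinates are the trainable "parameters" of the placement.
pos = torch.nn.Parameter(torch.rand(num_cells, 2) * 100.0)
opt = torch.optim.Adam([pos], lr=0.5)
gamma = 4.0  # smoothing factor for the log-sum-exp wirelength approximation

def smooth_hpwl(xy):
    """Differentiable approximation of half-perimeter wirelength."""
    total = 0.0
    for net in nets:
        p = xy[net]  # pin coordinates of this net, shape (pins, 2)
        # log-sum-exp approximates max/min per coordinate with smooth gradients
        total = total + gamma * (
            torch.logsumexp(p / gamma, dim=0) + torch.logsumexp(-p / gamma, dim=0)
        ).sum()
    return total

for step in range(100):
    opt.zero_grad()
    loss = smooth_hpwl(pos)  # a real engine adds a density term here
    loss.backward()
    opt.step()
```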
- 2. Machine Learning for Analog and Digital Design, Song Han, MIT
Abstract:
Machine learning is revolutionizing chip design automation. On the analog side, we introduce the GCN-RL circuit designer (DAC’20); on the digital side, we introduce NAAS: neural accelerator architecture search (DAC’21).
Automatic transistor sizing is a challenging problem due to the large design space, complex performance trade-offs, and fast technological advancements. Although there has been plenty of work on transistor sizing targeting a single circuit, limited research has been done on transferring knowledge from one circuit to another to reduce re-design overhead. The GCN-RL Circuit Designer leverages reinforcement learning (RL) to transfer knowledge between different technology nodes and topologies. Moreover, inspired by the fact that a circuit is a graph, we learn on the circuit topology representation with graph convolutional neural networks (GCNs). The GCN-RL agent extracts features of the topology graph, whose vertices are transistors and whose edges are wires. Our learning-based optimization consistently achieves the highest Figures of Merit (FoM) on four different circuits compared with conventional Bayesian optimization, evolutionary algorithms, random search, and human expert designs. Experiments show that transfer learning between five technology nodes yields on average 1.53x higher FoMs than optimization without transfer, and transfer between two circuit topologies yields on average 1.14x higher FoMs. Our transferable optimization method makes transistor sizing and design porting more effective and efficient. More info at gcnrl.mit.edu.
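As a rough illustration of the GCN-RL idea, the sketch below pairs a one-layer graph convolution over a toy transistor graph with a REINFORCE-style sizing policy. Here evaluate_fom() is a hypothetical stand-in for the circuit simulator, and the real agent, reward design, and transfer-learning machinery are considerably richer.

```python
# Sketch only: GCN features over a toy transistor graph drive a stochastic
# sizing policy trained by policy gradient against a placeholder FoM.
import torch

n = 8                                    # transistors (graph vertices)
adj = torch.eye(n)                       # toy adjacency with self-loops
adj[0, 1] = adj[1, 0] = 1.0              # example wire between two transistors
a_hat = torch.diag(1.0 / adj.sum(1)) @ adj   # row-normalized propagation matrix

feat = torch.randn(n, 16)                # per-transistor features (type, terminals, ...)
w1 = torch.nn.Parameter(torch.randn(16, 32) * 0.1)
w2 = torch.nn.Parameter(torch.randn(32, 2) * 0.1)   # per-transistor action mean/log-std
opt = torch.optim.Adam([w1, w2], lr=1e-3)

def evaluate_fom(sizes):
    # Hypothetical stand-in for SPICE simulation returning a figure of merit.
    return -((sizes - 3.0) ** 2).sum()

for episode in range(500):
    h = torch.relu(a_hat @ feat @ w1)    # one graph-convolution layer
    out = a_hat @ h @ w2                 # per-transistor action distribution params
    dist = torch.distributions.Normal(out[:, 0], out[:, 1].exp())
    action = dist.sample()               # continuous sizing action per transistor
    reward = evaluate_fom(action)
    loss = -(dist.log_prob(action).sum() * reward)   # REINFORCE-style update
    opt.zero_grad()
    loss.backward()
    opt.step()
```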
On the digital side, developing an efficient neural network architecture and hardware architecture together is challenging. To tackle this challenge, we propose a machine-learning-based design and optimization methodology, Neural Accelerator Architecture Search (NAAS). It holistically performs hardware architecture search, quantized neural architecture search, and tiling search in one optimization loop, composing highly matched neural-hardware architectures. NAAS can rival the best human-designed architectures with an additional 1.9x speedup and 1.6x energy savings at the same ImageNet accuracy. The evolutionary hardware architecture search can also effectively generalize to non-deep-learning accelerators.
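The joint search loop in NAAS can be pictured roughly as an evolutionary search over a combined encoding of hardware, quantization, and tiling choices. The sketch below is a toy version under that assumption, with a hypothetical estimate_cost() standing in for real accelerator and accuracy evaluation.

```python
# Toy evolutionary loop over a joint (hardware, bit-width, tiling) encoding;
# estimate_cost() is a made-up analytical proxy, not a real evaluator.
import random

def random_candidate():
    return {
        "pe_array": random.choice([(8, 8), (16, 16), (32, 16)]),   # hardware search
        "bits": random.choice([4, 6, 8]),                          # quantized NN search
        "tile": random.choice([(16, 16), (32, 8), (64, 4)]),       # tiling search
    }

def estimate_cost(c):
    # Hypothetical cost model: combine a latency proxy and an energy proxy.
    pes = c["pe_array"][0] * c["pe_array"][1]
    latency = 1e6 / (pes * c["bits"])
    energy = pes * c["bits"] * 0.01
    return latency + energy               # single scalar to minimize

population = [random_candidate() for _ in range(32)]
for generation in range(50):
    population.sort(key=estimate_cost)
    parents = population[:8]              # keep the fittest candidates
    children = []
    for _ in range(24):
        child = dict(random.choice(parents))
        key = random.choice(list(child))  # mutate one field of the joint encoding
        child[key] = random_candidate()[key]
        children.append(child)
    population = parents + children

print("best candidate:", min(population, key=estimate_cost))
```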
- 3. Reinforcement Learning for Analog EDA, Lihong Zhang, Memorial University of Newfoundland
Abstract:
This presentation will discuss the application of reinforcement learning to analog electronic design automation (EDA) challenges. After covering the basics of reinforcement learning, we will discuss our recent studies in analog circuit sizing and analog layout placement. Some research insights on reinforcement learning in analog EDA will be presented as well.
- 4. Improving Circuit Design Productivity with Latest ML Methods, Haoxing (Mark) Ren, NVIDIA Research
Abstract:
How do we leverage rapid advancements in the machine learning field to improve circuit design productivity? In this talk, we will present our research on applying the latest machine learning methods, such as graph neural networks (GNNs) and reinforcement learning (RL), to three long-standing circuit design problems: layout parasitics prediction, circuit size optimization, and layout generation. Slow convergence between schematic and layout designs is a long-standing problem in circuit design. We designed a GNN model called ParaGraph to predict layout parasitics from the schematic graph. Trained on a large set of industrial analog and mixed-signal circuits, it achieves higher accuracy than previous prediction methods and less than 10% final simulation error on the test circuits. We then extended this work to the circuit size optimization problem based on a Bayesian optimization (BO) framework. In-the-loop parasitics prediction with ParaGraph ensures that the final optimized design accounts for parasitic effects. We also included the ParaGraph graph embeddings in the BO surrogate model, which achieved 20% better prediction accuracy and improved optimization convergence by 3.7x. Automatic layout generation is another holy-grail problem for circuit design. We designed a layout generation tool called NVCell for standard cells. It has a simulated-annealing-based placement engine and a genetic-algorithm-based routing engine. Unlike previous layout approaches, it leverages RL to learn to fix complex DRC violations like a game. It can successfully generate over 90% of the cells in an advanced technology node, and 13% of the generated cells have smaller area than the manual design library.
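As a rough illustration of the ParaGraph-style idea, the sketch below trains a small graph-convolution regressor to predict a per-node parasitic value from a toy schematic graph. The graph, features, and targets are placeholders, and the real model uses richer heterogeneous message passing and real layout-extracted labels.

```python
# Sketch only: GNN regression of per-node parasitics on a random toy graph.
import torch

n = 10                                   # schematic nodes (devices / nets)
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float() + torch.eye(n)   # symmetric, with self-loops
adj = adj.clamp(max=1.0)
a_hat = torch.diag(1.0 / adj.sum(1)) @ adj           # row-normalized propagation

feat = torch.randn(n, 8)                 # node features (device type, geometry, ...)
target = torch.rand(n, 1)                # toy "ground-truth" parasitic values

w1 = torch.nn.Parameter(torch.randn(8, 16) * 0.1)
w2 = torch.nn.Parameter(torch.randn(16, 1) * 0.1)
opt = torch.optim.Adam([w1, w2], lr=1e-2)

for epoch in range(300):
    h = torch.relu(a_hat @ feat @ w1)    # aggregate neighbor information
    pred = a_hat @ h @ w2                # per-node parasitic prediction
    loss = torch.nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a sizing loop, predictions from such a model could feed the BO surrogate so candidate sizings are scored with parasitics taken into account, which is the in-the-loop role described above.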
- 5. Learning to Play the Game of Macro Placement with Deep Reinforcement Learning, Young-Joon Lee, Google
Abstract:
We present a learning-based approach to macro placement, one of the most complex and time-consuming stages in the chip design process. Unlike prior methods, our approach has the ability to learn from past experience and improve over time. We pose macro placement as a Reinforcement Learning (RL) problem and train an agent to place the cells of a netlist onto a floorplan area. We develop a novel edge-based graph convolutional neural network architecture capable of learning rich and transferable representations of the block. Our method is capable of leveraging past experience to become both better and faster at solving new instances of the problem. Our objective is to minimize total wirelength under predefined congestion and density targets. We show that for the floorplan blocks of modern accelerator chips, our method can generate high-quality macro placements in terms of power, performance, and area in under 6 hours, whereas the baseline approach requires multiple iterations by human experts and takes several weeks.
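The sequential-decision framing can be sketched as follows; a uniform-random policy stands in for the learned edge-based GCN policy described above, and half-perimeter wirelength on a toy grid stands in for the full reward with congestion and density terms.

```python
# Sketch only: macro placement as a sequential decision problem with a
# random policy and a toy negative-wirelength reward.
import random

GRID = 16                                   # placement grid (GRID x GRID sites)
macros = list(range(8))                     # toy netlist: 8 macros
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0)]

def place_episode(policy):
    free = {(x, y) for x in range(GRID) for y in range(GRID)}
    placement = {}
    for m in macros:                        # one macro placed per RL step
        site = policy(m, free)              # action: pick a free grid site
        placement[m] = site
        free.remove(site)                   # mask so macros never overlap
    # Reward: negative half-perimeter wirelength over all nets.
    hpwl = 0
    for a, b in nets:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        hpwl += abs(xa - xb) + abs(ya - yb)
    return placement, -hpwl

def random_policy(macro, free):
    return random.choice(sorted(free))      # a trained agent would score sites instead

best = max((place_episode(random_policy) for _ in range(200)), key=lambda r: r[1])
print("best reward (negative wirelength):", best[1])
```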