Machine learning (ML) and artificial intelligence (AI) have been taking the technology industry by storm for the past few years, from self-driving cars and acrobatic robots to a champion computer chess player. Yet in hardware and systems design, there has been little push to use these remarkable technologies to enhance the design process. With the exception of integrated circuit layout tools, very few hardware and system design methodologies have had ML and AI as their main ingredient. This has been particularly true in the area of high-speed signal integrity (SI) and power integrity (PI) analysis.
In 2016, the situation finally changed. Eleven companies representing a broad spectrum of the hardware industry and three top-tier US universities joined forces to establish the Center for Advanced Electronics through Machine Learning (CAEML, or the Center). The three founding universities of CAEML were the University of Illinois at Urbana-Champaign, Georgia Tech, and North Carolina State. Funding was also partially supported by the National Science Foundation. The stated goal of CAEML is to enable the fast, accurate design and verification of microelectronic circuits and systems by creating machine learning algorithms to derive models used for electronic design automation.
Five main research thrusts were established by CAEML: 1) theory and machine learning efficiency; 2) design and system optimization; 3) modeling and simulation; 4) verification; and 5) reliability and security. They represent the diverse interests of the funding members, which include leaders in the system, semiconductor, and electronic design automation (EDA) industries as well as government research labs.
CAEML at DesignCon
DesignCon has been pleased to give CAEML some early outlets to showcase its research and activities, from a keynote speech by CAEML director, professor Elyse Rosenbaum, to special technical panels on machine learning and artificial intelligence. I am happy to say that at the upcoming 2019 conference, machine learning and artificial intelligence have finally arrived at DesignCon as a separate, dedicated track. As chairman of the Industrial Advisory Board (IAB) of CAEML and co-chair of the DesignCon Machine Learning Track along with Scott Wedge of Synopsys, I am happy to share some highlights of the upcoming machine learning events at DesignCon.
Before we go through the programs, let's review some of the unique requirements of ML/AI for high-speed/high-performance hardware design. While the basic mathematics and tool sets of machine learning can apply to high-speed signal integrity and power integrity designs, many of the applications center on active learning, system optimization, and time series prediction. Also, the feature size of a typical problem is small to medium, ranging from a few to tens of parameters. Some of the popular algorithms for deep learning with a graphics processing unit (GPU) may or may not be applicable in this situation. Instead, Bayesian learning, surrogate modeling, and recurrent neural networks (RNN) are the prevailing methods of choice.
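To make the surrogate-modeling idea concrete, here is a minimal sketch in Python with NumPy. The `simulate_eye_height` function is a hypothetical stand-in for an expensive channel simulation (not drawn from any CAEML work); the point is that when only a handful of parameters matter, a cheap model fitted to a few simulation samples can be swept and optimized in place of the simulator:

```python
import numpy as np

# Hypothetical "expensive simulator": eye height (V) of a link as a
# function of one tuning parameter (e.g., a pre-emphasis setting).
# In practice each call would be a full channel simulation.
def simulate_eye_height(x):
    return 0.4 - 0.3 * (x - 0.6) ** 2 + 0.05 * np.sin(8 * x)

# Sample the expensive simulator at only a handful of points ...
x_train = np.linspace(0.0, 1.0, 7)
y_train = simulate_eye_height(x_train)

# ... fit a cheap polynomial surrogate to those samples ...
coeffs = np.polyfit(x_train, y_train, deg=4)
surrogate = np.poly1d(coeffs)

# ... then optimize over the surrogate instead of the simulator.
x_dense = np.linspace(0.0, 1.0, 1001)
x_best = x_dense[np.argmax(surrogate(x_dense))]
print(f"surrogate optimum near x = {x_best:.3f}")
```

Methods such as Bayesian optimization refine this loop by choosing where to sample next, which matters when each simulation is costly.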
For the past few years, CAEML researchers have been diving deep into these new methods and applying them to problems suggested by the IAB members. Companies outside of CAEML have also applied similar technologies to solve their own SI/PI problems. Since 2016, some of these technologies have finally matured to yield excellent results. The DesignCon Machine Learning Track (Track 15) reflects some of the wonderful results these researchers have achieved.
The hardware machine learning group from Hewlett-Packard Enterprise is proud to team up with two of the CAEML center directors—professor Paul Franzon of North Carolina State University and professor Madhavan Swaminathan of Georgia Tech, along with their teams—to offer a full-day boot camp on machine learning and artificial intelligence and their application to signal integrity (SI) and power integrity (PI). Not only will participants learn the basics of ML/AI; they will also learn the specialized areas mentioned above, such as Bayesian learning, surrogate models, optimization, and recurrent neural networks.
Get up to Speed on ML/AI
Because the boot camp will be held on Tuesday, a day before the main conference starts, participants who have no prior knowledge of ML/AI can be rapidly brought to a level where they can engage with the deeper technical aspects of the papers presented in the ML/AI track over the following two days. The boot camp will end with a hands-on application using Tektronix measurements and a recurrent neural network training exercise. After the boot camp, the DesignCon technical sessions begin, with papers in the following categories:
In the area of design and system optimization, we have researchers from Georgia Tech and Cadence describing how they utilized polynomial chaos expansion to generate surrogate models for high-speed systems. My team from Hewlett-Packard Enterprise will detail accelerating 56G PAM4 link equalization optimization using the machine learning technique of Principal Component Analysis (PCA).
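For readers unfamiliar with PCA, the following sketch shows the basic idea on synthetic equalizer-tap data (the dataset, dimensions, and numbers are invented for illustration, not taken from the paper): when many tap weights are driven by only a few underlying factors, PCA finds those factors and collapses the search space accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 candidate equalizer settings, each with
# 8 tap weights. The taps are generated from 2 underlying factors
# plus a little noise, mimicking correlated tap settings.
n_samples, n_taps = 200, 8
latent = rng.normal(size=(n_samples, 2))           # 2 hidden factors
mixing = rng.normal(size=(2, n_taps))              # spread across 8 taps
taps = latent @ mixing + 0.01 * rng.normal(size=(n_samples, n_taps))

# PCA via SVD of the mean-centered data.
centered = taps - taps.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

# Nearly all of the variance lives in the first two components, so
# the 8-dimensional tap search collapses to a 2-dimensional one.
reduced = centered @ Vt[:2].T   # shape (200, 2)
print("variance explained by 2 PCs:", explained[:2].sum())
```

Optimizing over the two principal components instead of all eight taps is what accelerates the search; the actual HPE flow is, of course, more involved than this toy.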
In modeling and simulation for signal integrity, a team from Google will detail their high-speed channel modeling method using recurrent neural networks (RNN), support vector machines (SVM), and artificial neural networks (ANN). Along a similar path, the Intel team will describe their macro-modeling approach for 25/56/112Gb PAM4 signals with machine learning. To round out SI modeling with machine learning, engineers from Xilinx will detail how they use reinforcement learning techniques to tune their CTLE equalizer performance models. For power integrity (PI) modeling, the Ansys team will outline their deep learning techniques for analyzing timing constraints under the impact of dynamic voltage power noise. In parallel, engineers from Nvidia will detail their deep learning algorithms with deep neural network (DNN) models to predict simultaneous switching noise (SSN).
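To give a flavor of why recurrent networks suit channel modeling, here is a toy Elman-style RNN forward pass in NumPy. The weights are random and untrained, purely to show the recurrence; the point is that the hidden state carries memory of past inputs, much as a channel's inter-symbol interference carries the effect of past symbols:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy RNN cell with a 4-dimensional hidden state. Weights are
# random (untrained); a real channel model would be trained on
# measured or simulated waveforms.
n_hidden = 4
W_in = rng.normal(scale=0.5, size=(n_hidden, 1))         # input -> hidden
W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # recurrence
W_out = rng.normal(scale=0.5, size=(1, n_hidden))        # hidden -> output

def rnn_forward(inputs):
    h = np.zeros((n_hidden, 1))
    outputs = []
    for x in inputs:
        h = np.tanh(W_in * x + W_h @ h)   # state update carries memory
        outputs.append((W_out @ h).item())
    return outputs

# Feed a single input pulse: the output keeps ringing after the
# pulse ends because the hidden state remembers it.
y = rnn_forward([0, 0, 1, 0, 0, 0])
```

That lingering response to a single pulse is exactly the kind of time-series behavior that makes RNNs a natural fit for channel modeling, compared with memoryless feed-forward models.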
Finally, in the reliability area, a team from Samsung Electronics will describe how they use a deep neural network to model their electrostatic discharge (ESD) measurements.
From the above rich list of papers, it is clear that ML/AI have matured from the initial basic investigations described in the technical panels of earlier DesignCon years to full-blown applications and deployed solutions. To highlight this progression, our technical panel at DesignCon will focus on actual deployed examples of ML/AI. An expert panel from the system, semiconductor, and electronic design automation (EDA) businesses will present their deployed ML/AI examples and share their future visions of ML/AI. The panel is open to all registered attendees, including those with a free exhibition pass.
Another session free to all attendees is the keynote speech by Gloria Lau, head of hardware at Uber. She will discuss Uber's hyperscale AI effort to enable self-driving and flying cars. We are looking forward to hearing about the flying cars!
As co-chair of the machine learning track at DesignCon 2019, I really look forward to the above exciting ML/AI sessions. I deeply appreciate all the hard work put into this conference by the papers’ authors, boot camp instructors, keynote speaker, and ML/AI track technical program committees. I hope you will enjoy this as much as I will, and I hope to see you at DesignCon 2019!
Chris Cheng is a distinguished technologist at the Storage Division of Hewlett-Packard Enterprise and co-chair of the machine learning and AI track. At Hewlett-Packard, he is responsible for managing all high-speed, analog, and mixed-signal designs within the Storage Division. He previously held senior engineering positions at Sun Microsystems, where he developed the original GTL system bus with Bill Gunning. He was a principal engineer at Intel, where he led the high-speed processor bus design team. He was also the first hardware engineer at 3PAR and guided its high-speed design effort until it was acquired by Hewlett-Packard.