Five HPC Trends to Watch in 2023

Here are some of the key drivers of high-performance computing, from the perspective of an EDA vendor.

December 28, 2022

High-performance computing is taking on greater importance with the help of other trends, such as edge computing, the growth of chiplets, AI and machine learning, CaaS (Computing as a Service), and the need for sustainability. Image courtesy of Agencja Fotograficzna Caro / Alamy

Charlie Matar, Sr. Vice President, Engineering, and Rita Horner, Product Marketing Director, Synopsys Solutions Group

The concept of high-performance computing (HPC) has evolved over the last several years, both in terms of its strict definition, and, more importantly, where and how it can be used. No longer relegated to large data centers, research labs, and supercomputers, today HPC is being used in various industries for tasks such as product design, financial modeling, weather forecasting, and more. It has permeated applications even closer to our everyday lives with its ability to bring ever more powerful computer capabilities into experiences we depend on and enjoy in our homes, offices, and cars.

The fundamental reason for HPC’s growth and expansion comes down to a single word: data — as in more of it that needs to be processed, analyzed, and moved around faster than ever before. High-performance computing addresses the unrelenting cycle of more data produced, served, and consumed — whether that’s through our multi-service streaming habits at home, our increasingly connected cars, or the amount of information we require to do our jobs, monitor our health, or manage our finances.

As the demand for HPC grows, there is a lockstep connection with the need for faster, more powerful, and more efficient semiconductors. Indeed, despite the overall ebbs and flows of the chip business, the HPC sector shows consistent and sustainable growth. As an enabler of critical semiconductor technology behind HPC, Synopsys has a bird's-eye view of the changing requirements and new uses of high-performance computing. So, what's ahead for high-performance computing in 2023?


1. The Growth of Edge (Distributed) Computing

Without a doubt, edge computing is a key trend that is changing the computing landscape in general, but it may seem like it’s at the other end of the spectrum from traditional HPC. High-performance computing tends to get associated with large, centralized compute and storage resources—indeed the backbone of remote, cloud-based computing. By contrast, edge computing centers on the processing of data at or near the edge of a network, instead of sending it all the way back to a central location. In this way, it can offer lower latency, and in many cases, more secure operating characteristics.


But these worlds are converging, and increasingly edge computing is HPC, simply located outside a traditional data center. We have an explosion of data to thank for that. The amount of data generated at the edge is growing exponentially in volume and complexity, driven by a vast range of internet of things (IoT) devices and the demand for smart everything. Edge computing is needed chiefly where latency matters: where the round trip to and from a cloud or centralized data center cannot meet the required response time. In other cases, the data set is simply too large to send to the cloud for processing or even storage. Applications include urban traffic management and related automated driving systems, precision medicine, fraud detection, business intelligence, smart city development, and more.
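The latency argument can be made concrete with a back-of-the-envelope model: total response time is the network round trip, plus the time to move the payload over the link, plus processing time. All figures below are illustrative assumptions, not measurements.

```python
# Rough latency comparison for cloud vs. edge processing.
# Every number here is an illustrative assumption.

def total_latency_ms(network_rtt_ms: float, payload_mb: float,
                     link_mbps: float, compute_ms: float) -> float:
    """Round-trip network time + payload transfer time + processing time."""
    transfer_ms = payload_mb * 8 / link_mbps * 1000  # MB -> Mb, s -> ms
    return network_rtt_ms + transfer_ms + compute_ms

# A hypothetical 4 MB camera frame for an automated-driving perception task:
cloud = total_latency_ms(network_rtt_ms=60, payload_mb=4,
                         link_mbps=100, compute_ms=10)
edge = total_latency_ms(network_rtt_ms=1, payload_mb=4,
                        link_mbps=1000, compute_ms=25)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

Even with a slower local processor, the edge path wins because the payload never crosses the wide-area network; that is the essence of the tradeoff described above.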


We believe that edge computing will have a significant impact on HPC system vendors, cloud services providers, networking and storage suppliers, as organizations look to incorporate remote HPC capabilities with locally generated and processed data strategies. As part of that, we also expect to see an expansion of the physical footprint of HPC from a centralized delivery model to a more distributed approach that includes locations close to heavy data-generating edge locations.

From a chip design perspective, edge computing, while still requiring optimal power, performance, and area (PPA), carries another key priority: reducing the latency with which these devices can process and transfer data. Design strategies must prioritize data transfer speed and efficiency in these ICs, such as the chiplet architectures discussed below. Of course, a chip design solution must consider all aspects of the PPA tradeoff and provide advanced capabilities for designing and analyzing an optimized IC for any given application requirement. This includes powerful simulation and verification tools, power and thermal analysis capabilities, smart implementation of design layouts, and a range of certified IP blocks for key functions and interfaces. Going forward, we will see increasing demand for design solutions that enable lower power usage, from data centers to battery-operated IoT devices.

2. Chiplets Come of Age

One of the latest trends in HPC is the use of multi-die systems. Multi-die systems have gained favor in the high-performance computing world as a way to keep Moore's law—slowed by device physics and the economic challenges of manufacturing traditional monolithic silicon architectures—on track. In short, traditional monolithic systems on chip (SoCs) are becoming too big and costly to produce for advanced designs, and yield risk grows along with design size. The multi-die approach is attractive as a viable way to extend the PPA benefits of Moore's law, delivering more processing capability without requiring an increase in chip area or power. It also supports a heterogeneous mix-and-match approach, pairing each function with the process technology best suited to it. Disaggregating SoC components, manufacturing them separately, and then bringing those distinct functions together in a single package results in less waste, while providing a way to rapidly create new product variants with optimized system power and performance.
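The yield argument can be illustrated with the classic Poisson die-yield model, Y = exp(-A · D0), where A is die area and D0 is the defect density. The defect density below is an assumed, illustrative value; real figures vary by process node and foundry.

```python
import math

def poisson_yield(area_mm2: float, defect_density_per_mm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_mm2 * defect_density_per_mm2)

D0 = 0.001  # defects per mm^2 -- illustrative assumption

mono = poisson_yield(800, D0)     # one large monolithic die
chiplet = poisson_yield(200, D0)  # one of four smaller chiplets

print(f"800 mm2 monolithic die yield: {mono:.1%}")
print(f"200 mm2 chiplet die yield:    {chiplet:.1%}")
```

Because each small die yields far better than one large die, and defective chiplets can be screened out before packaging, a defect wastes one small die rather than an entire large one. That is the economic pull behind disaggregation.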

While multi-die systems are well-positioned to be a fundamental enabler of HPC, design approaches must evolve to deal with new challenges. For example, die-to-die interfaces that support high bandwidth, low latency, power efficiency, and error-free performance are essential for fast, reliable data transfer. Enhanced tools, methodologies, and IP are required to deal with heterogeneous integration, interconnect, and packaging issues that arise from this multi-die approach. Additional expertise and technology in areas such as advanced packaging and silicon photonics are important to drive new levels of innovation and design efficiency.

3. The Extending Reach of AI and Machine Learning


Another significant trend that cuts across all aspects of HPC is the rise of artificial intelligence (AI) and machine learning. This is an area where HPC enjoys a symbiotic relationship with AI.

On the one hand, high-performance computers need to power AI workloads, which are ubiquitous in today's automated, data-intensive world. This is a fast-growing area for HPC suppliers, who see new opportunities almost everywhere there are computing needs. But to support AI workloads, compute platforms require relentless improvements in performance from the underlying hardware, putting pressure on chip designers to continuously innovate. Here, too, AI comes into play, with AI-enabled design tools now being used to deal with the complexity and scale of leading-edge chip design by optimizing tedious or overly detailed tasks that are best handled by trained AI algorithms. This not only improves overall engineering productivity but also frees up designers to focus on more innovation-oriented work.

On the other hand, high-performance computing relies on AI itself to run data centers efficiently and safely. Whether it’s monitoring the health and security of storage, servers, and networking gear, ensuring correct configurations, predicting equipment failure, or screening data for malware, AI gives a new level of insight and predictive maintenance to HPC users. AI can also be used to reduce electricity consumption and improve efficiency by optimizing heating and cooling systems, critical sustainability concerns that are top of mind for data center operators (which we’ll cover more in-depth below).

4. The Case for HPCaaS

With the vast increase in computational power required in all aspects of business, companies are looking to the value of the "as-a-service" model to satisfy their cyclical computing needs. Enter HPC as a Service (HPCaaS). In addition to peak workload efficiency, such a model offers services and support for companies that don't have the in-house knowledge, resources, or infrastructure to take advantage of HPC via the cloud. HPCaaS makes HPC capacity easy to deploy and scale, and more predictable from a cost perspective.

In our world of IC design, we are seeing great interest in this model for accessing the computing resources required to perform data-intensive chip design tasks. Complex HPC IC designs, consisting of multi-core architectures, are a prime example. They require increased computation, storage, and processing for design and development. Often there is a need for parallel processing of large volumes of data to achieve convergence in design and verification for these types of designs. This hosted model is being used by large semiconductor firms and startups that are developing high-performance chips for HPC, an interesting symbiotic relationship where the enablers of HPC also depend on its capabilities.
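Why elastic, on-demand capacity pays off for these parallel verification workloads can be sketched with Amdahl's law: the speedup from adding workers is limited by whatever fraction of the job stays serial. The 95% parallel fraction below is an assumed, illustrative figure, not a measured EDA workload profile.

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Amdahl's law: speedup with n workers when fraction p parallelizes."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)

# Assume 95% of a regression-verification run parallelizes across machines:
for n in (8, 64, 512):
    print(f"{n:4d} workers -> {amdahl_speedup(0.95, n):.1f}x speedup")
```

The returns diminish toward a ceiling of 1 / (1 - p), which is why bursting to a large pool for peak design times, then releasing it, is more economical than owning that capacity year-round.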

As with other HPCaaS enterprise uses, cloud-based EDA provides scalability, flexibility, productivity, and security in the IC development process. Companies can adjust HPC access based on specific usage needs, peak design times, and distributed work structures—all without needing dedicated resource management expertise on staff. All these come on top of raw performance and throughput gains, a crucial need for EDA tools. At Synopsys, we have seen design tasks realize substantial run-time improvements with the cloud-based model compared with on-premises hosting and management of the tools.


5. Sustainability Takes Center Stage

As rapidly as we have improved HPC capabilities and scaled the benefits across so many aspects of our lives, we are paying a price in terms of the environmental impact of these power-hungry systems. Some experts predict that data centers alone will consume between 3% and 7% of the world's power by 2030. On a local level, many data centers face backlash and even new construction permitting obstacles because of the amount of electricity and water they need. Powering and cooling these massive computing platforms has become a sustainability hot button, and metrics such as Power Usage Effectiveness (PUE) and carbon emissions are top of mind from the board room down.

We applaud efforts by HPC companies and cloud service companies to come up with innovative ideas to address these challenges. Fundamental shifts in powering data centers through sustainable energy sources—hydro, solar, wind—are becoming more widespread. Novel approaches, such as immersive or liquid cooling techniques (including underwater data centers); redistribution and recycling of the energy and water consumed by data centers for other uses (e.g., heating buildings); and the use of more environmentally friendly components, materials, and manufacturing approaches across the supply chain ecosystem all hold significant potential. The HPCaaS model we identified above is also inherently a more resource-efficient approach.

For our part, we know improvements in energy consumption and heat dissipation can be made at the chip level. For example, high-performance computing IC designs can be better optimized for power through advanced low-power design methodologies and power-optimized IP cores, reducing the overall energy consumption of both the chip and the system it powers.
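One lever behind such methodologies is dynamic voltage and frequency scaling (DVFS), which exploits the fact that CMOS switching power scales as P = C · V² · f, so power falls quadratically with supply voltage. The operating points below are illustrative assumptions, not figures for any particular chip.

```python
def dynamic_power(c_eff_farads: float, vdd_volts: float, freq_hz: float) -> float:
    """CMOS dynamic switching power: P = C_eff * Vdd^2 * f."""
    return c_eff_farads * vdd_volts**2 * freq_hz

# Hypothetical operating points for one core:
base = dynamic_power(1e-9, 1.0, 2.0e9)  # 1 nF effective cap, 1.0 V, 2.0 GHz
dvfs = dynamic_power(1e-9, 0.8, 1.5e9)  # scaled down to 0.8 V, 1.5 GHz

print(f"power reduced to {dvfs / base:.0%} of baseline")
```

A 25% frequency reduction paired with a 20% voltage drop cuts dynamic power by more than half in this model, which is why voltage scaling is among the first tools reached for in low-power design.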

The chiplet trend offers another significant potential way to reduce power, not to mention reduce physical waste in manufacturing through higher yields. More power-sensitive data transport methods, such as high bandwidth memory (HBM), can also make chips, and the systems they power, more energy efficient. These are being helped along by standards and open-source efforts such as CXL, UCIe, and OCP.

In summary, the HPC sector is continuously evolving and expanding, bringing new improvements every day to our lives. But its proliferation is a double-edged sword as it creates unrelenting and performance-taxing increases in data creation and consumption, which can translate into harmful environmental effects. Solutions for these challenges are improving, and Synopsys looks forward to playing a role in keeping HPC on a sustainable, scalable growth path.

