Intel, Nvidia Primed for Heavyweight Battle in AI

While one of the leading semiconductor suppliers is pulling out all the stops to capture AI applications, a games-turned-AI chipmaker is not letting up either.

Spencer Chin, Senior Editor

April 1, 2024

Two heavyweights in the electronics industry, Intel and Nvidia, are primed for a showdown to capture the rapidly growing AI market. Image credit: Burazin/The Image Bank via Getty Images

At a Glance

  • Nvidia has not let up in its quest to be the leader in AI processors.
  • Intel has intensified its AI efforts and plans to increase production of AI chips in new and upgraded plants.

But given the boom in artificial intelligence and machine learning in the past few years, Intel has another company to worry about: Nvidia. The semiconductor supplier, which earned its reputation designing high-speed chips for gaming and graphics, has made substantial investments in GPUs for artificial intelligence and machine learning, to the point where its technology announcements overwhelmingly revolve around AI and its applications. That was the case at this year's Nvidia GTC Summit.

At GTC, Nvidia made a slew of announcements, headed by the unveiling of the Blackwell GPU architecture, which is expected to expedite not only generative AI but also data processing, engineering simulation, electronic design automation, computer-aided design, and quantum computing. According to Nvidia, companies expected to adopt the Blackwell architecture include Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI.

The architecture centers on the NVIDIA DGX SuperPOD™, powered by NVIDIA GB200 Grace Blackwell Superchips, which is designed to process trillion-parameter models with constant uptime for superscale generative AI training and inference workloads. The system features a liquid-cooled, rack-scale architecture and is built with NVIDIA DGX™ GB200 systems. It provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory.


[Image: NVIDIA Grace Blackwell DGX SuperPOD with DGX GB200 systems]

While Nvidia’s technology appears to have given the company a leg up for now, it needs to keep innovating if it wants to retain its current momentum, according to David Nicholson, Chief Research Officer at The Futurum Group. “When Nvidia tells us to think about inference as ‘generation,’ they say that inference does not represent any home field advantage to legacy CPU vendors like Intel and AMD. That remains to be seen. The headwind for Nvidia will be the cost to complete inference tasks if Intel, AMD, and others can demonstrate that, especially at the edge, they can deliver more bang for your buck.”

Catching Up

While Nvidia appears to have a clear leg up on its competitors in the AI market, Intel cannot be counted out. The semiconductor giant has stepped up its emphasis on AI over the past year, particularly as it recovers from a prolonged period of stagnant sales and earnings that began once the COVID-19 pandemic ended. Intel had ridden a wave of momentum for its processors as demand for PCs skyrocketed while people worked from home during the pandemic.


Intel may have recently received a boost to its accelerated chip development efforts. In a press announcement, the U.S. Department of Commerce proposed up to $8.5 billion in direct funding to Intel under the CHIPS and Science Act, which would help advance Intel’s plant building and expansion efforts in onshore locations including Arizona, New Mexico, Oregon, and Ohio. Intel’s investments are slated to create more than 10,000 company jobs and nearly 20,000 construction jobs, as well as support more than 50,000 jobs through suppliers and supporting industries.

Intel would also benefit from a U.S. Treasury Department Investment Tax Credit (ITC) of up to 25% on more than $100 billion in qualified investments and eligibility for federal loans up to $11 billion.

Intel has introduced several fast processors for AI applications in recent months. At the recent Mobile World Congress in Barcelona, Intel also announced an Edge AI platform for scaling AI applications. The open, modular platform will enable enterprises to purchase a complete AI solution or build their own within existing environments. Enterprise developers can build edge-native AI applications on new or existing infrastructure and manage edge solutions end-to-end for their specific use cases. The platform provides infrastructure management and AI application development capabilities that can integrate into existing software stacks via open standards.

MLCommons, an AI benchmarking organization, recently published results of the industry-standard MLPerf v4.0 inference benchmark. Intel submitted results for Intel® Gaudi® 2 accelerators and 5th Gen Intel® Xeon® Scalable processors with Intel® Advanced Matrix Extensions (AMX). The results showed that the Intel Gaudi 2 AI accelerator remains the only benchmarked alternative to the Nvidia H100 for generative AI (GenAI) performance. Intel’s 5th Gen Xeon results improved by an average of 1.42x over 4th Gen Intel® Xeon® processors’ results in MLPerf Inference v3.1.

[Image: 5th Gen Intel Xeon processors]

Still, Intel has a formidable task ahead. The Futurum Group's David Nicholson said, “I am concerned about what has happened...or NOT happened...with the ‘CHIPS Act’ funding that was assumed would quickly follow Intel's Foundry Services announcements. There seems to be some restating of intentions as far as fabrication facility construction is concerned. Having said this, ‘AI chips’ include GPUs not made by Nvidia as well as CPUs not made by Nvidia, so I believe that Intel will benefit from the AI revolution, while Nvidia continues to increase the gap between themselves and Intel. Intel's partnerships are critical to their continued success, so I consider them table stakes.”

AMD Still a Factor?

Lurking in the background at the moment is AMD, which has been unveiling processors for AI applications.

In February, AMD unveiled its Embedded+ architecture, which combines embedded processors with adaptive SoCs to accelerate time-to-market for Edge AI applications. Embedded+ also allows system designers to choose from an ecosystem of ODM board offerings based on the Embedded+ architecture and scale their product portfolios to deliver performance and power profiles best suited to customers’ target applications.

Futurum’s David Nicholson believes a dynamic in the PC market, where OEMs don’t necessarily align themselves with one brand, could help AMD if its products pass muster with design engineers.

"As AMD rolls out their latest CPU and GPU technology, I believe end users will be less brand-sensitive than in the past. It will be an even more dispassionate evaluation based on 'tasks per dollar.' So this could translate into Intel being squeezed between Nvidia and AMD as opposed to how we have viewed Intel's competition with AMD in the past. The bottom line is that this is an opportunity for AMD."

About the Author(s)

Spencer Chin

Senior Editor, Design News

Spencer Chin is a Senior Editor for Design News, covering the electronics beat, which includes semiconductors, components, power, embedded systems, artificial intelligence, augmented and virtual reality, and other related subjects. He is always open to ideas for coverage. Spencer has spent many years covering electronics for brands including Electronic Products, Electronic Buyers News, EE Times, Power Electronics, and electronics360. You can reach him at [email protected] or follow him at @spencerchin.
