AMD Shows Why High-End EPYC CPUs Are Important to Unlock the Full Potential of AI Accelerators — Without Them, Performance Suffers

Jun 12, 2025 at 04:48pm EDT

AMD highlighted the importance of server CPUs to the data center segment at its recent Advancing AI keynote, claiming that pairing an AI accelerator with the wrong CPU can hold back performance.

AMD Reveals That Adopting EPYC Server CPUs Brings Up to 17% Higher Performance Across Inference Workloads

We haven't talked much about how vital server CPUs are to the AI compute segment, even though these processors play an essential role in training and inference scenarios alongside the AI accelerators themselves. Over the past few quarters, Team Red has been aggressively expanding its presence in the data center segment, particularly with its EPYC server CPU offerings, which is why the firm has evolved into such an important player. Now, at the Advancing AI keynote, AMD's Executive VP Forrest Norrod dove into how crucial it is to have a balanced CPU + GPU combo in the server segment.
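
To make the CPU's role concrete, here is a minimal Python sketch, using entirely made-up numbers, of why the host CPU can bottleneck an accelerator: the CPU-side stages of a serving pipeline (tokenization, batching, scheduling) must keep pace with the GPU, or the accelerator sits idle. The rates and the `effective_throughput` helper are illustrative assumptions, not AMD's figures.

```python
# Purely illustrative toy model (not measured EPYC/MI300X data): in a serving
# pipeline the host CPU prepares each batch (tokenization, scheduling, KV-cache
# bookkeeping) before the GPU can run it, so steady-state throughput is capped
# by the slower of the two stages.

def effective_throughput(cpu_batches_per_sec: float, gpu_batches_per_sec: float) -> float:
    """Pipeline throughput is bounded by the slower stage."""
    return min(cpu_batches_per_sec, gpu_batches_per_sec)

GPU_RATE = 120.0  # hypothetical accelerator capacity, batches/sec

for cpu_name, cpu_rate in [("slower host CPU", 105.0), ("faster host CPU", 130.0)]:
    tput = effective_throughput(cpu_rate, GPU_RATE)
    idle = max(0.0, 1.0 - tput / GPU_RATE)  # fraction of GPU capacity left unused
    print(f"{cpu_name}: {tput:.0f} batches/s, GPU idle {idle:.0%}")
```

With these toy numbers, the slower host CPU leaves 12% of the GPU's capacity on the table, which is the kind of gap AMD's comparison is getting at.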


The AMD executive compared Intel's 5th Gen Xeon 8592+ processor against AMD's own EPYC 9575F server CPU, with both platforms equipped with Instinct MI300X AI accelerators, and showed how large a performance gap can open up when clients don't have the right CPU platform onboard. Before we dive into the benchmarks, let's evaluate the fairness of the CPU selection. AMD, of course, went with its newest EPYC 9005 lineup, stacking it up against the Xeon 8592+, which launched almost two years ago. On paper, both CPUs offer a 64-core/128-thread configuration with similar TDPs, so the comparison was at least reasonably balanced.

The benchmarks the company shared show that adopting the presumably superior EPYC 9005 platform brings a 6% average performance uplift across multiple tests on the Llama 3.1 8B model, and that the gap widens to as much as 17% as the parameter count increases. The tests cover a range of inference workloads, showing that the choice of server CPU has a real impact on overall performance. The benchmarks were conducted with Instinct MI300X AI accelerators, though I do wonder how much software optimization played a role here.
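
For context on how such uplift figures are typically derived, the sketch below shows a bare-bones throughput harness. `run_inference` is a hypothetical stand-in for a real inference engine (it merely sleeps to simulate generation), so this illustrates the measurement methodology only, not AMD's actual test setup.

```python
# Minimal sketch of a tokens-per-second harness, assuming a hypothetical
# run_inference() that stands in for a real serving stack; here it just
# sleeps to simulate generation, so the numbers mean nothing by themselves.

import time

def run_inference(prompt: str, max_new_tokens: int) -> int:
    """Hypothetical placeholder for a real engine call; returns tokens generated."""
    time.sleep(0.001 * max_new_tokens)  # stand-in for actual GPU generation time
    return max_new_tokens

def tokens_per_second(prompts: list[str], max_new_tokens: int = 128) -> float:
    start = time.perf_counter()
    generated = sum(run_inference(p, max_new_tokens) for p in prompts)
    return generated / (time.perf_counter() - start)

throughput = tokens_per_second(["example prompt"] * 8)
print(f"throughput: {throughput:.1f} tokens/s")
# Running the identical harness on two host CPUs (same GPU, same model) and
# taking the ratio of the two throughputs is how an "X% uplift" is derived.
```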

Regardless of how representative AMD's benchmarks are, the company is clearly seeing massive adoption in the server CPU segment, with its market share climbing dramatically in just a few years.
