AMD EPYC 7000 Series Server Processors Officially Launched – Zen “Zeppelin” Based MCM With Up To 32 Cores, 64 Threads, 128 PCIe Lanes and Aimed at Intel’s Xeon
The day AMD has been waiting for has finally arrived. Today, AMD officially launches its EPYC 7000 series processors. The new chips mark AMD's return to a server market that has not seen an AMD server chip in a long time, but now, AMD is back and in an EPYC way.
AMD EPYC 7000 Series Server Processors Mark an EPYC Return To The Server Market – Up To 32 Cores, 64 Threads, 128 PCIe Lanes and Beefy Specs Aimed at Intel’s Xeon Lineup
The EPYC server processors were unveiled at AMD's Financial Analyst Day along with other tech announcements. Looking at the server market, Intel remains dominant across every segment, holding a tight grip with its Xeon processors.
“With our EPYC family of processors, AMD is delivering industry-leading performance on critical enterprise, cloud, and machine intelligence workloads,” said Lisa Su, president and CEO, AMD. “EPYC processors offer uncompromising performance for single-socket systems while scaling dual-socket server performance to new heights, outperforming the competition at every price point. We are proud to bring choice and innovation back to the datacenter with the strong support of our global ecosystem partners.” via AMD
AMD, on the other hand, used to offer a range of Opteron series processors and held a very decent share, but since Bulldozer, that share has dropped to effectively zero. After more than six years and countless engineering hours spent designing both Zen and EPYC, the day has finally come when AMD will once again offer a competitive solution, one that is not only great on its own but disrupts Intel's hold on the server market.
AMD is targeting both single-socket (1P) and dual-socket (2P) systems, which together make up over 90% of the server market. The processor lineup is built on the foundation of AMD's Zen core, using a multi-chip package design: each Zeppelin die carries 8 Zen cores, and AMD uses up to four Zeppelin dies on its flagship 32-core, 64-thread processor. That yields some disruptive numbers in terms of core count, thread count and I/O.
The AMD EPYC 7000 Series Family Detailed – EPYC 7601 Is The Flagship
We will get back to the features in a bit, but first, let's take a look at the lineup itself. The AMD EPYC family will be branded as the "EPYC 7000" series and features 12 models. Of the 12, three are designed specifically for single-socket solutions while the rest can operate in 2P platforms.
The fastest of the EPYC 7000 series processors is the EPYC 7601, which comes with 32 cores and 64 threads. Clock speeds are 2.2 GHz base and 3.2 GHz boost, and the TDP is 180W, the same figure we are going to get on Ryzen Threadripper processors. In the single-socket lineup, the fastest chip is the EPYC 7551P, which also has 32 cores and 64 threads. It's clocked at 2.0 GHz base and 3.0 GHz boost and carries a 180W TDP.
Both processors have 64 MB of L3 cache and feature AMD's Infinity Fabric for fast chip-to-chip interconnect. Infinity Fabric is a series of high-performance, scalable links that improves scaling across chips, delivering more performance while improving product yields and reducing product cost. The rest of the lineup is detailed below:
AMD EPYC 7000 Series Server Lineup:
| CPU Name | CPU Cores | CPU Threads | L3 Cache | Base Clock | Boost Clock | TDP | Pricing Range | Platform Support |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EPYC 7601 | 32 | 64 | 64 MB | 2.2 GHz | 3.2 GHz | 180W | >4000 USD | AMD 2P |
| EPYC 7551 | 32 | 64 | 64 MB | 2.0 GHz | 3.0 GHz | 180W | >3200 USD | AMD 2P |
| EPYC 7501 | 32 | 64 | 64 MB | 2.0 GHz | 3.0 GHz | 155/170W | >2700 USD | AMD 2P |
| EPYC 7451 | 24 | 48 | 48 MB | 2.3 GHz | 3.2 GHz | 180W | >2400 USD | AMD 2P |
| EPYC 7401 | 24 | 48 | 48 MB | 2.0 GHz | 3.0 GHz | 155/170W | >1700 USD | AMD 2P |
| EPYC 7351 | 16 | 32 | 32 MB | 2.4 GHz | 2.9 GHz | 155/170W | >1100 USD | AMD 2P |
| EPYC 7301 | 16 | 32 | 32 MB | 2.2 GHz | 2.7 GHz | 155/170W | >800 USD | AMD 2P |
| EPYC 7281 | 16 | 32 | 32 MB | 2.1 GHz | 2.7 GHz | 155/170W | >600 USD | AMD 2P |
| EPYC 7251 | 8 | 16 | 16 MB | 2.1 GHz | 2.9 GHz | 120W | >400 USD | AMD 2P |
| EPYC 7551P | 32 | 64 | 64 MB | 2.0 GHz | 3.0 GHz | 180W | >2000 USD | AMD 1P |
| EPYC 7401P | 24 | 48 | 48 MB | 2.0 GHz | 3.0 GHz | 155/170W | >1000 USD | AMD 1P |
| EPYC 7351P | 16 | 32 | 32 MB | 2.4 GHz | 2.9 GHz | 155/170W | >700 USD | AMD 1P |
AMD EPYC 7000 Series Processors Chip Shot (Image Credits: Computerbase):
AMD EPYC 7000 Series Features, Performance and Platform Detailed
Coming to the platform itself, EPYC will be shipping with processors that feature up to 32 Zen cores as detailed above. The platform will support 8 memory channels and 128 lanes of high-bandwidth I/O. Each EPYC processor can support 16 DIMMs for up to 2 TB memory support and a 2P or dual socket platform will feature 64 cores, 4 TB memory support and 128 PCI Express lanes.
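The capacity figures above are easy to sanity-check. A quick sketch of the arithmetic follows; the 128 GB-per-DIMM module size is an assumption on my part (it is the module size that makes AMD's quoted totals work out), not a figure AMD stated:

```python
# Back-of-the-envelope check of EPYC's stated memory capacity.
# DIMM_SIZE_GB is an assumption: 128 GB modules make the quoted totals line up.
DIMMS_PER_SOCKET = 16
DIMM_SIZE_GB = 128

per_socket_tb = DIMMS_PER_SOCKET * DIMM_SIZE_GB / 1024
dual_socket_tb = 2 * per_socket_tb

print(per_socket_tb)   # 2.0 TB per socket, as AMD quotes
print(dual_socket_tb)  # 4.0 TB in a 2P system
```

With 16 DIMMs per socket, the 2 TB single-socket and 4 TB dual-socket figures both fall out directly.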
AMD is not only taking the fight to Intel in the single-socket space but also aims to disrupt the two-socket market. AMD claims an EPYC 1S platform can offer up to 50 percent better processor performance than an Intel 2S solution while also consuming less power. Performance and pricing comparisons of various AMD EPYC 7000 series processors versus their Broadwell-EP counterparts from Intel are provided in the images below:
AMD EPYC processors set several performance records, including:
- AMD EPYC 7601-based system scored 2360 on SPECint_rate2006, higher than any other two-socket system score.
- AMD EPYC 7601-based system scored 1200 on SPECint_rate2006, higher than any other mainstream one-socket x86-based system score.
- AMD EPYC 7601-based system scored 943 on SPECfp_rate2006, higher than any other one-socket system score.
As far as these results are concerned, we should take the marketing claims with a pinch of salt, as noted by several independent tech sites, including Tom's Hardware and The Tech Report, who had the following to say on the matter:
AMD provided some basic benchmarks, seen in the slides above, that compare its processors to the nearest Intel comparables. The price and performance breakdown chart is perhaps the most interesting, as it indicates much higher performance (as measured by SPECint_rate_base2006), at every price point. It bears mentioning that Intel publicly posts its SPEC benchmark data, and AMD’s endnotes indicates that it reduced the scores used for these calculations by 46%. AMD justified this adjustment because they feel the Intel C++ compiler provides an unfair advantage in the benchmark. There is a notable advantage to the compiler, but most predict it is in the 20% range, so AMD’s adjustments appear aggressive. We should take these price and performance comparisons with the necessary skepticism and instead rely upon third-party data as it emerges.
AMD EPYC processors also use the Infinity Fabric for die-to-die and socket-to-socket interconnect, providing a fully connected, coherent link for the chip to communicate across its dies and even across sockets. Within a single package (1P), each Infinity Fabric link offers 42 GB/s of bi-directional bandwidth at low power and low latency.
In dual-socket (2P) solutions, the same fabric handles socket-to-socket communication at roughly 38 GB/s of bi-directional bandwidth per link. Four links are established between the sockets, so each CPU die connects directly to its peer die on the second socket and any transfer needs at most two hops, reducing latency. Together, the four links carry a total of 152 GB/s of bandwidth between the two sockets while consuming 10.9W (socket) and 5.3W (per processor). The entire silicon die can consume 2.37W on average when configured at full speed. Detailed information on the latency, bandwidth and pings between the dies and sockets is available from Anandtech:
Overall, AMD is rating their die-to-die bandwidth to be the same as dual channel memory bandwidth, rated at 170 GB/s (4 x 42.6).
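AMD's 170 GB/s figure is simply the four die-to-die links added together, a one-liner to verify:

```python
# AMD's die-to-die aggregate: four Infinity Fabric links at 42.6 GB/s each,
# which AMD rates as equal to dual-channel DDR4 memory bandwidth.
LINKS = 4
LINK_BW_GBPS = 42.6

total_gbps = LINKS * LINK_BW_GBPS
print(total_gbps)  # ~170 GB/s, matching AMD's quoted figure
```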
The benchmarks showcased by AMD are definitely impressive: EPYC beats two-socket Xeon configurations at similar or lower prices with better efficiency and more I/O capability. All processors in the EPYC 7000 series stack show disruptive results, though it would have been even better to see a comparison against Skylake-SP, the actual competitor to the Naples / EPYC platform.
AMD says its Naples platform ships 14% more cores per rack than Intel's: a single Intel rack holds 4,704 cores, while AMD's Zen-based Naples rack ships with 5,376 cores.
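For readers who want to check AMD's rack-density claim against the raw core counts, the math works out:

```python
# AMD's rack-density claim: 5376 Zen cores per Naples rack
# versus 4704 cores per Intel rack, per AMD's own figures.
amd_cores_per_rack = 5376
intel_cores_per_rack = 4704

advantage = amd_cores_per_rack / intel_cores_per_rack - 1
print(f"{advantage:.1%}")  # 14.3%, i.e. AMD's quoted ~14% advantage
```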
Upcoming Intel and AMD Server Platform Comparison:
| | Intel Xeon E5 Bronze / Silver | Intel Xeon E7 Gold / Platinum | AMD Naples Platform (2P) |
| --- | --- | --- | --- |
| Family Branding | Skylake-SP | Skylake-SP | AMD EPYC |
| PCH | Lewisburg PCH | Lewisburg PCH | SOC |
| Socket | Socket P (LGA 3647) | Socket P (LGA 3647) | SP3 LGA socket |
| Max Core Count | Up To 26 | Up To 32 | Up To 32 |
| Max Thread Count | Up To 52 | Up To 64 | Up To 64 |
| Max L3 Cache | 35.75 MB L3 | 38.5 MB L3 | 64 MB L3 |
| DDR4 Memory Support | 6-Channel DDR4 | 6-Channel DDR4 | 8-Channel DDR4 |
There's also a 14% advantage in virtual machines (VMs) per socket. Memory bandwidth sees a 33% advantage, as AMD offers 8 channels per socket while Intel's Purley platform is configured for 6. The Intel platform also supports 24 DIMMs in a 2P configuration, while AMD can support up to 32. AMD is also promising highly competitive price-to-performance ratios on Naples processors, which should give it a clear edge in the enterprise market.
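The 33% memory bandwidth figure follows directly from the channel counts in the table above, assuming both platforms run DDR4 at the same speed:

```python
# Per-socket channel counts and 2P DIMM counts from the comparison above.
amd_channels, intel_channels = 8, 6
amd_dimms_2p, intel_dimms_2p = 32, 24

# Assuming equal DDR4 transfer rates, bandwidth scales with channel count.
bw_advantage = amd_channels / intel_channels - 1
print(f"{bw_advantage:.0%}")             # 33%, AMD's quoted bandwidth advantage
print(amd_dimms_2p - intel_dimms_2p)     # 8 extra DIMM slots in a 2P system
```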
A Single AMD EPYC Processor Powers 24 3.2 TB NVMe Drives at Full PCIe x4 Speeds – Has 32 PCIe Lanes Still Left For Use
In one of the demonstrations, AMD used a single EPYC 7601 processor with its 128 PCIe lanes to run 24 NVMe drives, each with a capacity of 3.2 TB for a total of 76.8 TB, every drive on a full PCIe 3.0 x4 link. The system still had 32 PCIe lanes left over, highlighting one of the key features of AMD's EPYC processors.
During a 128K random benchmark demo, the system scored 9.1 million (9,178,000) read IOPS and 7.1 million (7,111,000) write IOPS and delivered 53.3 GB/s of storage bandwidth, an impressive feat for a server chip. All of this was achieved on a single-socket system, showing that AMD EPYC is well optimized for datacenter tasks where large amounts of data need to be managed efficiently.
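The lane accounting behind that demo is straightforward to verify from the figures AMD gave:

```python
# Lane and capacity accounting for AMD's single-socket NVMe demo.
TOTAL_LANES = 128        # PCIe 3.0 lanes on one EPYC processor
DRIVES = 24
LANES_PER_DRIVE = 4      # each drive on a full PCIe 3.0 x4 link
DRIVE_TB = 3.2

lanes_used = DRIVES * LANES_PER_DRIVE
lanes_left = TOTAL_LANES - lanes_used
total_capacity_tb = DRIVES * DRIVE_TB

print(lanes_used, lanes_left)  # 96 lanes used, 32 left over, as AMD stated
print(total_capacity_tb)       # 76.8 TB of raw flash
```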
EPYC Product Overview
- A highly scalable System on Chip (SoC) design ranging from 8-core to 32-core, supporting two high-performance threads per core
- Industry-leading memory bandwidth across the line-up, with 8 channels of memory on every EPYC device. In a two-socket server, support for up to 32 DIMMS of DDR4 on 16 memory channels, delivering up to 4 terabytes of total memory capacity
- Unprecedented support for integrated, high-speed I/O with 128 lanes of PCIe 3 on every product
- A highly-optimized cache structure for high-performance, energy efficient compute
- AMD Infinity Fabric coherent interconnect linking EPYC CPUs in a two-socket system
- Dedicated security hardware
AMD EPYC Security Features:
AMD has a full ecosystem ready for EPYC, with partners such as HPE, Dell EMC, Tyan, Supermicro, ASUS, Lenovo and many more on board. A whole stack of critical software systems and developer tools will also be ready for EPYC, optimized for various workloads.
“The EPYC processor represents a paradigm shift in computing and will usher in a new era for the IT ecosystem,” said Antonio Neri, EVP and general manager Enterprise Group, HPE. “Starting with the Cloudline CL3150 and expanding into other product lines later this year, the arrival of EPYC in HPE systems will be welcomed by customers who are eager to deploy the performance and innovation EPYC delivers.” via AMD
While AMD EPYC launches today against Intel's Broadwell-EP chips from last year, it should be noted that Intel is also releasing its Skylake-SP platform later in 2017, followed by Cascade Lake-SP in 2018. AMD plans to introduce its Rome server lineup based on the new Zen 2 cores later in 2018, featuring a 48-core chip codenamed "Starship", as detailed in an earlier leaked roadmap. More details on that here.