
Intel Officially Confirms Cascade Lake Advanced Performance Processors, Utilizes MCP With Up To 48 Cores, 12 Channel Memory Support – Shows First Performance Numbers Versus AMD EPYC


Intel has confirmed that they will be bringing their first multi-chip package Xeon CPUs to market in the form of Cascade Lake Advanced Performance, or Cascade Lake-AP for short. We have been hearing reports about the new series for a while now, but Intel has now officially revealed the new Xeon lineup and mentioned some early performance numbers.

Intel Announces Cascade Lake Advanced Performance Xeon Processors - More Cores, More Memory Lanes, Faster Clock Speeds, Optimized Cache and Spectre/Meltdown Mitigations Included

Intel isn't talking much about the Cascade Lake Advanced Performance specifications yet, as those will be presented at the Supercomputing 2018 conference in early November.


Today, Intel is sharing the initial details along with some performance numbers versus their own flagship, the Xeon Platinum 8180, and AMD's EPYC 7601. Intel states that the new Xeon class is built upon 20 years of Xeon innovation. Some of the features in the new lineup include:

  • Leadership Performance
  • Optimized Cache Hierarchy
  • Higher Frequencies
  • Security Mitigations
  • Intel Deep Learning Boost (VNNI)
  • Optimized Framework and Libraries

We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers’ system requirements. The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup once again demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers.

Lisa Spelman, vice president and general manager of Intel Xeon products and data center marketing

It looks like the new Xeon processors will be well rounded, not only featuring the above-mentioned technologies but also even more cores and memory channels. Intel is revealing that the Xeon Cascade Lake Advanced Performance processors will house up to 48 cores and 96 threads on a single package utilizing a multi-chip package (MCP) design.

Intel's Xeon processors currently top out at 28 cores and 56 threads, so the new AP series will deliver a good boost in performance from the higher core count alone. In addition, the chip will feature support for 12 DDR4 memory channels, a huge bump from the 6-channel memory featured on the Cascade Lake-SP Xeons.


The performance numbers were taken from a 2-socket server featuring two 48-core chips, which means a total of 96 cores and 192 threads. The 12-channel memory gives a total of 24 DIMM slots, and considering you can fill those with some really dense ECC memory, we will be looking at up to 3 TB of memory support. In addition to ECC memory, the new Xeons will also support Optane DC Persistent Memory, with module capacities of up to 512 GB. Twenty-four of these in the 2S server would give a mind-boggling 12 terabytes of system memory. This would deliver an unprecedented amount of memory bandwidth, which is only possible through a CPU in the class of Cascade Lake-AP.
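As a quick sanity check on those capacity figures, here is the arithmetic in a short Python sketch. The 512 GB Optane module size comes from Intel's announcement; the 128 GB ECC DIMM size is an assumption chosen to match the quoted 3 TB total.

```python
# Capacity math for the 2S Cascade Lake-AP server Intel described:
# 2 sockets x 12 memory channels = 24 DIMM slots (1 DIMM per channel).
sockets = 2
channels_per_socket = 12
dimm_slots = sockets * channels_per_socket            # 24 slots

ecc_dimm_gb = 128      # assumed dense ECC RDIMM size behind the 3 TB figure
optane_dimm_gb = 512   # Optane DC Persistent Memory module size, per Intel

ecc_total_tb = dimm_slots * ecc_dimm_gb / 1024        # 3.0 TB
optane_total_tb = dimm_slots * optane_dimm_gb / 1024  # 12.0 TB
print(f"{dimm_slots} slots: {ecc_total_tb} TB ECC / {optane_total_tb} TB Optane")
```

Filling every slot with the 512 GB Optane modules is what produces the 12 TB headline number.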

We can also tell that the platform is of a very different design compared to Purley, which houses Skylake-SP and will house the upcoming Cascade Lake-SP Xeons. We are probably looking at a different socket, rumored to be BGA 5903, and hence a different platform.

Now, coming to the official performance numbers, Intel claims that the 2S Cascade Lake-AP server is:

  • 3.4x faster than AMD EPYC 7601 (2S) in Linpack
  • 1.2x faster than Intel Xeon Scalable 8180 (2S) in Linpack
  • 1.3x faster than AMD EPYC 7601 (2S) in Stream Triad
  • 1.83x faster than Intel Xeon Scalable 8180 (2S) in Stream Triad
  • 17x the images per second versus Intel Xeon Scalable in AI/DL inference

The following testing methodologies were used, as mentioned in the presentation footnotes:

Performance results are based on testing or projections as of 6/2017 to 10/3/2018 (Stream Triad), 7/31/2018 to 10/3/2018 (LINPACK) and 7/11/2017 to 10/7/2018 (DL Inference) and may not reflect all publicly available security updates.

LINPACK: AMD EPYC 7601:  Supermicro AS-2023US-TR4 with 2 AMD EPYC 7601 (2.2GHz, 32 core) processors, SMT OFF, Turbo ON,  BIOS ver 1.1a, 4/26/2018, microcode: 0x8001227,  16x32GB DDR4-2666, 1 SSD,  Ubuntu 18.04.1 LTS (4.17.0-041700-generic Retpoline), High-Performance Linpack v2.2, compiled with Intel(R) Parallel Studio XE 2018 for Linux, Intel MPI version, AMD BLIS ver 0.4.0, Benchmark Config: Nb=232, N=168960, P=4, Q=4, Score = 1095GFs, tested by Intel as of July 31, 2018. compared to 1-node, 2-socket 48-core Cascade Lake Advanced Performance processor projections by Intel as of 10/3/2018.
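For context on how a LINPACK score like the 1095 GFLOPS above is derived, the benchmark times a dense solve of Ax = b and divides the nominal operation count, (2/3)n³ + 2n², by the elapsed time. A rough NumPy sketch follows; the problem size is arbitrary, and NumPy's LAPACK backend stands in for a tuned HPL build, so the numbers are not comparable to real HPL runs.

```python
import time
import numpy as np

def linpack_gflops(n: int = 2000) -> float:
    """Time a dense solve of Ax = b and report GFLOP/s using the
    standard HPL operation count of (2/3)*n^3 + 2*n^2."""
    rng = np.random.default_rng(0)
    a = rng.random((n, n))
    b = rng.random(n)
    t0 = time.perf_counter()
    np.linalg.solve(a, b)     # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9

print(f"~{linpack_gflops():.1f} GFLOP/s on this machine")
```

The Nb, N, P, Q values in the footnote above are HPL's block size, problem size, and process-grid dimensions for the tuned run.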

Stream Triad: 1-node, 2-socket AMD EPYC 7601, tested by AMD as of June 2017 compared to 1-node, 2-socket 48-core Cascade Lake Advanced Performance processor projections by Intel as of 10/3/2018.

DL Inference: Platform: 2S Intel Xeon Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC).
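For reference, Stream Triad measures sustained memory bandwidth with the kernel a[i] = b[i] + scalar * c[i], counting three 8-byte transfers per element (read b, read c, write a). A minimal NumPy approximation is below; the array size is my own choice, and a real STREAM run uses a tuned C binary, so treat this purely as an illustration of the metric.

```python
import time
import numpy as np

def stream_triad_gbps(n: int = 20_000_000, scalar: float = 3.0) -> float:
    """Run the STREAM Triad kernel and report sustained bandwidth
    in GB/s, counting 3 x 8 bytes moved per array element."""
    b = np.full(n, 1.0)
    c = np.full(n, 2.0)
    t0 = time.perf_counter()
    a = b + scalar * c            # Triad: a[i] = b[i] + scalar * c[i]
    elapsed = time.perf_counter() - t0
    assert a[0] == 7.0            # sanity check: 1.0 + 3.0 * 2.0
    return (3 * n * 8) / elapsed / 1e9

print(f"~{stream_triad_gbps():.1f} GB/s on this machine")
```

Doubling the memory channels from 6 to 12 roughly doubles the ceiling this kernel can reach, which is why Triad is the metric Intel leads with for Cascade Lake-AP.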

Finally, Intel is saying that the Xeon Cascade Lake Advanced Performance processors will be launching in the first half of 2019 which also follows the earlier rumors we had heard.

Intel confirmed their other Xeon families a while ago too, which include:

  • Cascade Lake is a future Intel Xeon Scalable processor based on 14nm technology that will introduce Intel Optane DC persistent memory and a set of new AI features called Intel DL Boost. This embedded AI accelerator will speed deep learning inference workloads, with image recognition expected to be 11 times faster than the current-generation Intel Xeon Scalable processors at their July 2017 launch. Cascade Lake is targeted to begin shipping late this year.
  • Cooper Lake is a future Intel Xeon Scalable processor that is based on 14nm technology. Cooper Lake will introduce a new generation platform with significant performance improvements, new I/O features, new Intel DL Boost capabilities (Bfloat16) that improve AI/deep learning training performance, and additional Intel Optane DC persistent memory innovations. Cooper Lake is targeted for 2019 shipments.
  • Ice Lake is a future Intel Xeon Scalable processor based on 10nm technology that shares a common platform with Cooper Lake and is planned as a fast follow-on targeted for 2020 shipments.

It will be interesting to see how Intel packs the new Cascade Lake-AP processors, considering their HCC dies scale up to only 28 cores. We may be looking at a die designed exclusively for the AP series, but I guess we have to wait till the Supercomputing 2018 conference to find out more. Till then, let us know your thoughts below about the Cascade Lake-AP processors and how you think they will compare against AMD's EPYC processors, especially with 7nm EPYC chips launching next year.

In addition to all of the Cascade Lake Advanced Performance news, the Xeon E-2100 (1S) CPUs also hit general market availability today. They were announced a while back and are now available via Intel and their distributor partners around the globe, featuring up to 6 cores, 12 threads, and faster memory support.

Intel Xeon SP Families (Preliminary):

| Family Branding | Skylake-SP | Cascade Lake-SP/AP | Cooper Lake-SP | Ice Lake-SP | Sapphire Rapids | Emerald Rapids | Granite Rapids | Diamond Rapids |
|---|---|---|---|---|---|---|---|---|
| Process Node | 14nm+ | 14nm++ | 14nm++ | 10nm+ | Intel 7 | Intel 7 | Intel 3 | Intel 3? |
| Platform Name | Intel Purley | Intel Purley | Intel Cedar Island | Intel Whitley | Intel Eagle Stream | Intel Eagle Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream |
| Core Architecture | Skylake | Cascade Lake | Cascade Lake | Sunny Cove | Golden Cove | Raptor Cove | Redwood Cove? | Lion Cove? |
| IPC Improvement (Vs Prev Gen) | 10% | 0% | 0% | 20% | 19% | 8%? | 35%? | 39%? |
| MCP (Multi-Chip Package) SKUs | No | Yes | No | No | Yes | Yes | TBD (Possibly Yes) | TBD (Possibly Yes) |
| Socket | LGA 3647 | LGA 3647 | LGA 4189 | LGA 4189 | LGA 4677 | LGA 4677 | TBD | TBD |
| Max Core Count | Up To 28 | Up To 28 | Up To 28 | Up To 40 | Up To 56 | Up To 64? | Up To 120? | Up To 144? |
| Max Thread Count | Up To 56 | Up To 56 | Up To 56 | Up To 80 | Up To 112 | Up To 128? | Up To 240? | Up To 288? |
| Max L3 Cache | 38.5 MB | 38.5 MB | 38.5 MB | 60 MB | 105 MB | 120 MB? | 240 MB? | 288 MB? |
| Vector Engines | AVX-512/FMA2 | AVX-512/FMA2 | AVX-512/FMA2 | AVX-512/FMA2 | AVX-512/FMA2 | AVX-512/FMA2 | AVX-1024/FMA3? | AVX-1024/FMA3? |
| Memory Support | DDR4-2666 6-Channel | DDR4-2933 6-Channel | Up To 6-Channel DDR4-3200 | Up To 8-Channel DDR4-3200 | Up To 8-Channel DDR5-4800 | Up To 8-Channel DDR5-5600? | Up To 12-Channel DDR5-6400? | Up To 12-Channel DDR6-7200? |
| PCIe Gen Support | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 4.0 (64 Lanes) | PCIe 5.0 (80 Lanes) | PCIe 5.0 (80 Lanes) | PCIe 6.0 (128 Lanes)? | PCIe 6.0 (128 Lanes)? |
| TDP Range (PL1) | 140W-205W | 165W-205W | 150W-250W | 105W-270W | Up To 350W | Up To 375W? | Up To 400W? | Up To 425W? |
| 3D XPoint Optane DIMM | N/A | Apache Pass | Barlow Pass | Barlow Pass | Crow Pass | Crow Pass? | Donahue Pass? | Donahue Pass? |
| Competition | AMD EPYC Naples 14nm | AMD EPYC Rome 7nm | AMD EPYC Rome 7nm | AMD EPYC Milan 7nm+ | AMD EPYC Genoa ~5nm | AMD Next-Gen EPYC (Post Genoa) | AMD Next-Gen EPYC (Post Genoa) | AMD Next-Gen EPYC (Post Genoa) |