AMD Reaffirms EPYC Bergamo CPUs In 1H 2023, Instinct MI300 APUs In 2H 2023

Feb 2, 2023 at 01:55pm EST

AMD has reaffirmed the launch plans for its next-generation EPYC Bergamo CPUs and Instinct MI300 APUs, both of which launch this year.

AMD EPYC Bergamo CPUs & Instinct MI300 APUs To Power Next-Gen Data Centers This Year

AMD already has a lead on Intel, with its EPYC Genoa CPUs having launched months ahead of the Xeon Sapphire Rapids CPUs. Fast forward to 2023, and AMD is planning to launch four brand-new data-center products: Genoa-X, Bergamo, Siena, and Instinct MI300. During its recent Q4 2022 earnings call, AMD once again confirmed that its EPYC Bergamo CPUs will launch in 1H 2023, followed by the Instinct MI300 APUs in 2H 2023.


AMD Instinct MI300 In 2H 2023 - Powering 2+ Exaflops El Capitan Supercomputer

The AMD Instinct MI300 will be a multi-chip and multi-IP Instinct accelerator that not only features the next-gen CDNA 3 GPU cores but is also equipped with the next-generation Zen 4 CPU cores.

The latest specifications unveiled for the AMD Instinct MI300 accelerator confirm that this exascale APU is going to be a monster of a chiplet design. The chip will encompass several 5nm 3D chiplet packages, all combining to house an insane 146 billion transistors. Those transistors cover various core IPs, memory interfaces, interconnects, and much more. The CDNA 3 architecture is the fundamental DNA of the Instinct MI300, but the APU also comes with a total of 24 Zen 4 data-center CPU cores and 128 GB of next-generation HBM3 memory running on an 8,192-bit wide bus, which is truly mind-blowing.
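Those memory figures line up with a straightforward stack layout. As a sanity check, here is a minimal sketch assuming the standard HBM3 arrangement of a 1,024-bit interface per stack; the resulting per-stack capacity is our inference, not an AMD-confirmed figure:

```python
# Back-of-the-envelope check of the quoted MI300 memory configuration.
# Assumption: each HBM3 stack exposes a 1,024-bit interface (HBM3 standard);
# AMD has not broken down the per-stack capacity itself.
BUS_WIDTH_BITS = 8192       # total bus width quoted for MI300
BITS_PER_STACK = 1024       # per-stack interface width (HBM3 standard)
TOTAL_CAPACITY_GB = 128     # total HBM3 capacity quoted for MI300

stacks = BUS_WIDTH_BITS // BITS_PER_STACK          # number of HBM3 stacks
capacity_per_stack = TOTAL_CAPACITY_GB // stacks   # GB per stack

print(f"{stacks} HBM3 stacks, {capacity_per_stack} GB each")  # 8 stacks, 16 GB each
```

An 8,192-bit bus therefore implies eight HBM3 stacks of 16 GB apiece surrounding the compute chiplets.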

AMD will be utilizing both 5nm and 6nm process nodes for its Instinct MI300 'CDNA 3' APUs. The chip will be outfitted with the next generation of Infinity Cache and feature the 4th Gen Infinity architecture, which enables CXL 3.0 ecosystem support. The Instinct MI300 accelerator will rock a unified memory APU architecture and new math formats, allowing for a 5x performance-per-watt uplift over CDNA 2, which is massive. AMD is also projecting over 8x the AI performance versus the CDNA 2-based Instinct MI250X accelerators. The CDNA 3 GPU's unified memory APU architecture (UMAA) will connect the CPU and GPU to a single HBM memory pool, eliminating redundant memory copies while delivering a lower TCO.

In terms of when - we've talked before about sort of our Data Center GPU ambitions and the opportunity there. We see it as a large opportunity. As we go into the second half of the year and launch MI300, sort of the first user of MI300 will be the supercomputers or El Capitan, but we're working closely with some large cloud vendors as well to qualify MI300 in AI workloads. And we should expect that to be more of a meaningful contributor in 2024. So lots of focus on just a huge opportunity, lots of investments in software as well to bring the ecosystem with us.

AMD CEO, Lisa Su (Q4 2022 Earnings Call)

AMD EPYC Bergamo In 1H 2023 - Topping Up The Core Count To 128 With Zen 4C

The AMD EPYC Bergamo chips will feature up to 128 cores and will be aimed at the HBM-powered Xeon chips, along with higher-core-count ARM-architecture server products from Apple, Amazon, and Google. Both Genoa and Bergamo utilize the same SP5 socket; the main difference is that Genoa is optimized for higher clocks while Bergamo is optimized for higher-throughput workloads.

Bergamo will launch in the first half of the year. We are on track for the Bergamo launch, and you'll see that become a larger contributor in the second half. So as we think about the Zen 4 ramp and the crossover to our Zen 3 ramp, it should be towards the end of the year, sort of in the fourth quarter, that you would see a crossover of sort of Zen 4 versus Zen 3, if that helps you.

AMD CEO, Lisa Su (Q4 2022 Earnings Call)

It is stated that AMD's EPYC Bergamo CPUs will arrive in the first half of 2023 and will run the same code as Genoa and behave like Genoa, with the core coming in at roughly half the size of Genoa's. The CPUs are specifically positioned against the likes of AWS's Graviton CPUs and other ARM-based solutions where peak frequency isn't a requirement but throughput across many cores is. One workload example for Bergamo would be Java, where the extra cores can definitely come in handy. Following Bergamo will be the TCO-optimized Siena lineup for the SP6 platform, which will play a crucial role in expanding AMD's TAM in the server segment.

AMD's EPYC & Instinct chips are expected to push the company's server market share to 30%, and possibly beyond, by the end of this year. The company has a strong roadmap laid out for the server market segment, and we can't wait to see how things evolve in the coming quarters.

AMD EPYC CPU Families:

| Family Name | Branding | Launch | CPU Architecture | Process Node | Platform | Socket | Max Cores | Max Threads | Max L3 Cache | Chiplet Design | Memory Support | Memory Channels | PCIe Gen Support | TDP (Max) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AMD EPYC Verano | EPYC 9007 | 2027 | Zen 7 | TBD | SP7 | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD |
| AMD EPYC Venice | EPYC 9006 | 2026 | Zen 6 | 2nm TSMC | SP7 | TBD | 96 | 192 | TBD | 8 CCDs (1 CCX per CCD) + 2 IOD? | DDR5-12800 | 16-Channel | 128-192 Gen 6 | ~600W |
| AMD EPYC Turin-X | EPYC 9005 | 2025 | Zen 5 | 4nm TSMC | SP5 | LGA 6096 | 128 | 256 | 1536 MB | 16 CCDs (1 CCX per CCD) + 1 IOD | DDR5-6000? | 12-Channel | TBD | 500W (cTDP 600W) |
| AMD EPYC Turin-Dense | EPYC 9005 | 2025 | Zen 5C | 3nm TSMC | SP5 | LGA 6096 | 192 | 384 | 384 MB | 12 CCDs (1 CCX per CCD) + 1 IOD | DDR5-6400 | 12-Channel | 128 Gen 5 | 500W (cTDP 450-500W) |
| AMD EPYC Turin | EPYC 9005 | 2024 | Zen 5 | 4nm TSMC | SP5 | LGA 6096 | 128 | 256 | 384 MB | 16 CCDs (1 CCX per CCD) + 1 IOD | DDR5-6400 | 12-Channel | 128 Gen 5 | 400W (cTDP 320-400W) |
| AMD EPYC Siena | EPYC 8004 | 2023 | Zen 4 | 5nm TSMC | SP6 | LGA 4844 | 64 | 128 | 256 MB | 8 CCDs (1 CCX per CCD) + 1 IOD | DDR5-5200 | 6-Channel | 96 Gen 5 | 70-225W |
| AMD EPYC Bergamo | EPYC 9004 | 2023 | Zen 4C | 4nm TSMC | SP5 | LGA 6096 | 128 | 256 | 256 MB | 12 CCDs (1 CCX per CCD) + 1 IOD | DDR5-5600 | 12-Channel | 128 Gen 5 | 320W (cTDP 400W) |
| AMD EPYC Genoa-X | EPYC 9004 | 2023 | Zen 4 V-Cache | 5nm TSMC | SP5 | LGA 6096 | 96 | 192 | 1152 MB | 12 CCDs (1 CCX per CCD) + 1 IOD | DDR5-4800 | 12-Channel | 128 Gen 5 | 400W |
| AMD EPYC Genoa | EPYC 9004 | 2022 | Zen 4 | 5nm TSMC | SP5 | LGA 6096 | 96 | 192 | 384 MB | 12 CCDs (1 CCX per CCD) + 1 IOD | DDR5-4800 | 12-Channel | 128 Gen 5 | 400W |
| AMD EPYC Milan-X | EPYC 7004 | 2022 | Zen 3 | 7nm TSMC | SP3 | LGA 4094 | 64 | 128 | 768 MB | 8 CCDs (1 CCX per CCD) + 1 IOD | DDR4-3200 | 8-Channel | 128 Gen 4 | 280W |
| AMD EPYC Milan | EPYC 7003 | 2021 | Zen 3 | 7nm TSMC | SP3 | LGA 4094 | 64 | 128 | 256 MB | 8 CCDs (1 CCX per CCD) + 1 IOD | DDR4-3200 | 8-Channel | 128 Gen 4 | 280W |
| AMD EPYC Rome | EPYC 7002 | 2019 | Zen 2 | 7nm TSMC | SP3 | LGA 4094 | 64 | 128 | 256 MB | 8 CCDs (2 CCXs per CCD) + 1 IOD | DDR4-3200 | 8-Channel | 128 Gen 4 | 280W |
| AMD EPYC Naples | EPYC 7001 | 2017 | Zen 1 | 14nm GloFo | SP3 | LGA 4094 | 32 | 64 | 64 MB | 4 CCDs (2 CCXs per CCD) | DDR4-2666 | 8-Channel | 64 Gen 3 | 200W |
