NVIDIA Next-Gen Data Center & AI GPU Roadmap Unveils GB200 & GX200 Chips

Hassan Mujtaba
Image Source: Wccftech

NVIDIA has officially confirmed its next-gen GB200 & GX200 Data Center & AI GPU accelerators, which are planned for 2024 and 2025, respectively.

NVIDIA Is Going All In On The AI Craze, Roadmap Unveils Next-Gen GB200 & GX200 GPUs For Data Centers

During a recent Investor Presentation held earlier this month, NVIDIA unveiled its accelerated computing vision, outlining how AI has changed the modern data center and how the company is offering the hardware and software capabilities to drive these new workloads.


For the past two years, NVIDIA has relied on its Hopper H100 and Ampere A100 GPUs, in collaboration with various partners, to serve the needs of AI & HPC customers worldwide, but all of that is about to change in 2024 with the arrival of Blackwell. NVIDIA saw a big boost to its data center and overall company revenue thanks to the AI craze, and that train is going full steam ahead as the green team aims to launch two brand-new GPU families by 2025.

Image Source: NVIDIA

The first of these new AI/HPC GPU families from NVIDIA is going to be Blackwell, named after the mathematician David Harold Blackwell (1919-2010). The lineup will succeed the GH200 Hopper series and will use the B100 chip. The company plans to offer various products, including the GB200NVL (NVLink), the standard GB200, and the B40 for visual compute acceleration. The next-gen lineup is expected to be unveiled at GTC 2024, followed by a launch later that year.

Current rumors suggest that NVIDIA will utilize TSMC's 3nm process node to produce its Blackwell GPUs, with the first chips delivered to customers by the end of 2024 (Q4). The GPU is also expected to be NVIDIA's first HPC/AI accelerator to utilize a chiplet design, and it will compete with AMD's Instinct MI300 accelerator, which the red team has touted as a big deal within the AI space.

The other chip that has been disclosed is the GX200, the follow-up to Blackwell with a launch scheduled for 2025. NVIDIA has been following a two-year cadence between its AI & HPC products, so we may only see an announcement of the chip in 2025, with actual units commencing shipments in 2026.

The lineup will be based on the X100 GPU and will include a GX200 lineup of products and a separate X40 lineup for enterprise customers. NVIDIA is known to name its GPUs after well-known scientists, and it already uses the Xavier codename for its Jetson series, so we can expect a different scientist's name for the X100 series. Beyond that, little is known about the X100 GPUs, but the codename is much better than the "Hopper-Next" placeholder NVIDIA used in prior roadmaps.

NVIDIA also plans to deliver major "doubling" upgrades to its Quantum (InfiniBand) and Spectrum-X (Ethernet) platforms with new BlueField and Spectrum products, offering up to 800 Gb/s transfer speeds by 2024 and up to 1,600 Gb/s by 2025. These new networking and interconnect interfaces will also go a long way toward helping the HPC/AI segment achieve the required performance.

NVIDIA Data Center / AI GPU Roadmap

| GPU Codename | GPU Family | GPU SKU | Memory | Launch |
|---|---|---|---|---|
| Feynman | GF200? | F200? | HBM4e/HBM5? | 2028 |
| Rubin (Ultra) | GR300? | R300? | HBM4 | 2027 |
| Rubin | GR200? | R200? | HBM4 | 2026 |
| Blackwell (Ultra) | GB300 | B300 | HBM3e | 2025 |
| Blackwell | GB200/GB100 | B100/B200 | HBM3e | 2024 |
| Hopper | GH200/GH100 | H100/H200 | HBM2e/HBM3/HBM3e | 2022-2024 |
| Ampere | GA100 | A100 | HBM2e | 2020-2022 |
| Volta | GV100 | V100 | HBM2 | 2018 |
| Pascal | GP100 | P100 | HBM2 | 2016 |

News Source: SemiAnalysis
