NVIDIA Blackwell Ultra goes official, offering a huge scale-up in memory capacity with upgraded AI compute capabilities for data centers.
NVIDIA Blackwell Gets Bigger With "Ultra" Variant, Packs 50% More Performance & 288 GB of HBM3e Memory
The first iteration of Blackwell launched with a few hiccups, but the company has since worked to put supply of its latest AI powerhouse in a much better state, delivering the latest hardware and solutions to major AI and data center vendors. The initial B100 and B200 GPU families already offer enormous AI compute capabilities, and the company aims to set new industry standards with its next-gen offering, Blackwell Ultra.
These B300 chips will expand upon Blackwell, offering not just increased memory capacity with up to 12-Hi HBM3E stacks but also greater compute capability for faster AI. The chips will be paired with the latest Spectrum Ultra X800 Ethernet switches (512-radix).

Press Release: Built on the groundbreaking Blackwell architecture introduced a year ago, Blackwell Ultra includes the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HGX B300 NVL16 system. The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72 and increases Blackwell’s revenue opportunity by 50x for AI factories, compared with those built with NVIDIA Hopper.
NVIDIA Blackwell Ultra Enables AI Reasoning
The NVIDIA GB300 NVL72 connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based NVIDIA Grace CPUs in a rack-scale design, acting as a single massive GPU built for test-time scaling. With the NVIDIA GB300 NVL72, AI models can access the platform’s increased compute capacity to explore different solutions to problems and break down complex requests into multiple steps, resulting in higher-quality responses.

GB300 NVL72 is also expected to be available on NVIDIA DGX Cloud, an end-to-end, fully managed AI platform on leading clouds that optimizes performance with software, services and AI expertise for evolving workloads. NVIDIA DGX SuperPOD with DGX GB300 systems uses the GB300 NVL72 rack design to provide customers with a turnkey AI factory.
The NVIDIA HGX B300 NVL16 features 11x faster inference on large language models, 7x more compute and 4x larger memory compared with the Hopper generation to deliver breakthrough performance for the most complex workloads, such as AI reasoning.
NVIDIA Data Center / AI GPU Roadmap
| GPU Codename | Feynman | Rubin (Ultra) | Rubin | Blackwell (Ultra) | Blackwell | Hopper | Ampere | Volta | Pascal |
|---|---|---|---|---|---|---|---|---|---|
| GPU Family | GF200? | GR300? | GR200? | GB300 | GB200/GB100 | GH200/GH100 | GA100 | GV100 | GP100 |
| GPU SKU | F200? | R300? | R200? | B300 | B100/B200 | H100/H200 | A100 | V100 | P100 |
| Memory | HBM4e/HBM5? | HBM4 | HBM4 | HBM3e | HBM3e | HBM2e/HBM3/HBM3e | HBM2e | HBM2 | HBM2 |
| Launch | 2028 | 2027 | 2026 | 2025 | 2024 | 2022-2024 | 2020-2022 | 2018 | 2016 |