NVIDIA has laid out the plans for its next-gen AI powerhouses, the Rubin & Rubin Ultra GPUs, along with Vera CPUs, taking the segment to new heights.
NVIDIA Rubin, Rubin Ultra GPUs & Next-Gen Vera CPUs Detailed - Super-Fast Platforms For AI Computing, Arriving in 2026-2027
This year, NVIDIA is upgrading Blackwell with its Blackwell Ultra platform, offering up to 288 GB of HBM3e memory, but next year, the Green Team is taking things to new heights with its brand-new CPU and GPU platforms, codenamed Rubin and Vera.
At GTC, NVIDIA detailed the next-gen platforms it is launching in 2026 and 2027. Starting with the first platform, we have the Vera Rubin system, which will scale the current NVL72 solutions up to NVL144. These AI platforms will arrive in the second half of 2026 and will be housed in Oberon racks with liquid cooling support.
Some of the primary features of the flagship Rubin Ultra NVL576 system (detailed further below) include the following; a quick check of the CPU-side math follows the list:
- 576 Rubin GPUs (15 EF FP4)
- 2,304 Memory Chips (150 TB @ 4.6 PB/s)
- 144 NVLINK Switches (1.5 PB/s)
- 1300 Trillion Transistors
- 12,672 Vera CPU Cores
- 25,344 Vera CPU Threads
- 576 ConnectX-9 NICs
- 72 BlueField DPUs
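As a quick back-of-the-envelope check, the CPU-side figures in that list line up with the 88-core, 176-thread Vera CPU detailed below: the core count implies 144 Vera CPUs per rack, and 2-way SMT reproduces the thread count. A minimal, purely illustrative sketch:

```python
# Quick check of the CPU-side figures listed above, using the 88-core /
# 176-thread Vera CPU spec NVIDIA quotes for the Vera Rubin platform.
vera_cores_per_cpu = 88
vera_threads_per_cpu = 176
listed_cores = 12_672
listed_threads = 25_344

implied_cpus = listed_cores // vera_cores_per_cpu             # 12,672 / 88 = 144 Vera CPUs
assert implied_cpus * vera_threads_per_cpu == listed_threads  # 144 * 176 = 25,344 threads
print(implied_cpus)  # -> 144
```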
NVIDIA Vera Rubin NVL144 System - Launching in 2H 2026
In terms of specifications, the NVIDIA Vera Rubin NVL144 platform will utilize two new chips. The Rubin GPU will package two reticle-sized chips, offering up to 50 PFLOPs of FP4 performance and 288 GB of next-gen HBM4 memory. These GPUs will be paired with the 88-core Vera CPU, which features a custom Arm architecture, 176 threads, and up to 1.8 TB/s of NVLINK-C2C interconnect.

In terms of performance scaling, the NVIDIA Vera Rubin NVL144 platform will offer 3.6 Exaflops of FP4 inference and 1.2 Exaflops of FP8 training compute, a 3.3x increase over GB300 NVL72. It will also pack 13 TB/s of HBM4 memory bandwidth with 75 TB of fast memory, a 60% uplift over GB300, and 2x the NVLINK and CX9 capabilities, rated at up to 260 TB/s and 28.8 TB/s, respectively.
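As a rough sanity check, the rack-level FP4 figure follows from the per-package number quoted above if the "NVL144" name is taken to count the 144 reticle-sized GPU dies, i.e. 72 dual-die Rubin packages (our reading, not an NVIDIA statement):

```python
# Back-of-the-envelope check of the Vera Rubin NVL144 rack-level figures,
# assuming "NVL144" counts 144 reticle-sized dies = 72 dual-die Rubin packages.
rubin_packages = 144 // 2            # 72 packages (assumption)
fp4_per_package_pflops = 50          # 50 PFLOPs FP4 per Rubin package (per NVIDIA)
hbm4_per_package_gb = 288            # 288 GB HBM4 per package (per NVIDIA)

rack_fp4_ef = rubin_packages * fp4_per_package_pflops / 1000
rack_hbm4_tb = rubin_packages * hbm4_per_package_gb / 1000

print(f"{rack_fp4_ef:.1f} EF FP4")   # -> 3.6 EF, matching the quoted figure
print(f"{rack_hbm4_tb:.1f} TB HBM4") # -> ~20.7 TB; the 75 TB "fast memory" figure
                                     #    presumably also counts the Vera CPUs'
                                     #    LPDDR (our assumption)
```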
NVIDIA Rubin Ultra NVL576 System - Launching in 2H 2027
The second platform will arrive in the second half of 2027 and will be called Rubin Ultra. This platform scales the NVL system from 144 to 576. The CPU architecture remains the same, but the Rubin Ultra GPU will feature four reticle-sized chips, offering up to 100 PFLOPS of FP4 and a total HBM4e capacity of 1 TB spread across 16 HBM sites.

In terms of performance scaling, the NVIDIA Rubin Ultra NVL576 platform will offer 15 Exaflops of FP4 inference and 5 Exaflops of FP8 training compute, a 14x increase over GB300 NVL72. It will also pack 4.6 PB/s of HBM4e memory bandwidth with 365 TB of fast memory, an 8x uplift over GB300, plus 12x the NVLINK and 8x the CX9 capabilities, rated at up to 1.5 PB/s and 115.2 TB/s, respectively.
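The same arithmetic works for the bigger system, assuming "NVL576" counts 576 reticle-sized dies, i.e. 144 quad-die Rubin Ultra packages (again our reading of the naming scheme):

```python
# Back-of-the-envelope check of the Rubin Ultra NVL576 rack-level figures,
# assuming "NVL576" counts 576 reticle-sized dies = 144 quad-die packages.
rubin_ultra_packages = 576 // 4       # 144 packages (assumption)
fp4_per_package_pflops = 100          # 100 PFLOPS FP4 per Rubin Ultra package (per NVIDIA)
hbm4e_per_package_tb = 1              # 1 TB HBM4e per package (per NVIDIA)

rack_fp4_ef = rubin_ultra_packages * fp4_per_package_pflops / 1000
rack_hbm4e_tb = rubin_ultra_packages * hbm4e_per_package_tb

print(f"{rack_fp4_ef:.1f} EF FP4")    # -> 14.4 EF, in line with the ~15 EF quoted
print(rack_hbm4e_tb, "TB HBM4e")      # -> 144 TB, in line with the ~150 TB in the spec
                                      #    list; the 365 TB "fast memory" figure presumably
                                      #    adds the Vera CPUs' LPDDR (our assumption)
```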
NVIDIA Data Center / AI GPU Roadmap
| GPU Codename | Feynman | Rubin (Ultra) | Rubin | Blackwell (Ultra) | Blackwell | Hopper | Ampere | Volta | Pascal |
|---|---|---|---|---|---|---|---|---|---|
| GPU Family | GF200? | GR300? | GR200? | GB300 | GB200/GB100 | GH200/GH100 | GA100 | GV100 | GP100 |
| GPU SKU | F200? | R300? | R200? | B300 | B100/B200 | H100/H200 | A100 | V100 | P100 |
| Memory | HBM4e/HBM5? | HBM4 | HBM4 | HBM3e | HBM3e | HBM2e/HBM3/HBM3e | HBM2e | HBM2 | HBM2 |
| Launch | 2028 | 2027 | 2026 | 2025 | 2024 | 2022-2024 | 2020-2022 | 2018 | 2016 |