
Mysterious NVIDIA ‘GPU-N’ Could Be Next-Gen Hopper GH100 In Disguise With 134 SMs, 8576 Cores & 2.68 TB/s Bandwidth, Simulated Performance Benchmarks Shown


A mysterious NVIDIA GPU known as 'GPU-N', which could possibly be our first look at the next-gen Hopper GH100 chip, has been revealed in a new research paper published by the green team (as discovered by Twitter user Redfire).

NVIDIA Research Paper Talks 'GPU-N' With MCM Design & 8576 Cores, Could This Be Next-Gen Hopper GH100?

The research paper, 'GPU Domain Specialization via Composable On-Package Architecture', presents a next-generation composable GPU design as the most practical solution for maximizing low-precision matrix-math throughput to boost Deep Learning performance. The 'GPU-N' and its respective COPA designs are discussed along with their possible specifications and simulated performance results.


The 'GPU-N' is said to feature 134 SM units (versus 108 SM units on the A100). That makes for a total of 8576 cores, a 24% increase over the current Ampere A100 solution. The chip is modeled at 1.4 GHz, the same theoretical clock speed as the Ampere A100 and Volta V100 (not to be confused with final clocks). Other specifications include a 60 MB L2 cache, a 50% increase over the Ampere A100, and a DRAM bandwidth of 2.68 TB/s that can scale up to 6.3 TB/s. The HBM2e DRAM capacity is 100 GB and can be expanded up to 233 GB with the COPA implementations. The memory is configured around a 6144-bit bus interface running at 3.5 Gbps.
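As a quick sanity check, the quoted 2.68 TB/s figure follows directly from the bus width and pin speed listed in the paper (a minimal Python sketch; the formula is the standard aggregate-bandwidth calculation, not something from the paper itself):

```python
# Aggregate HBM bandwidth = bus width (bits) x pin speed (Gbps) / 8 bits per byte
bus_width_bits = 6144   # 6144-bit HBM2e interface
pin_speed_gbps = 3.5    # 3.5 Gbps per pin

bandwidth_gbs = bus_width_bits * pin_speed_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # 2688 GB/s, i.e. ~2.68 TB/s as quoted
```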

Configuration        NVIDIA V100   NVIDIA A100   GPU-N
SMs                  80            108           134
GPU frequency (GHz)  1.4           1.4           1.4
FP32 (TFLOPS)        15.7          19.5          24.2
FP16 (TFLOPS)        125           312           779
L2 cache (MB)        6             40            60
DRAM BW (GB/s)       900           1,555         2,687
DRAM Capacity (GB)   16            40            100

Coming to the performance numbers, the 'GPU-N' (presumably Hopper GH100) produces 24.2 TFLOPs of FP32 (a 24% increase over the A100) and 779 TFLOPs of FP16 (a 2.5x increase over the A100), which comes close to the 3x gains that were rumored for GH100 over the A100. Compared to AMD's CDNA 2 'Aldebaran' GPU on the Instinct MI250X accelerator, the FP32 performance is roughly a quarter of the MI250X's matrix rate (24.2 vs 95.7 TFLOPs), but the FP16 performance is roughly 2x higher (779 vs 383 TFLOPs).
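The FP32 figures in the table line up with the usual back-of-the-envelope math, assuming 64 FP32 cores per SM and one fused multiply-add (2 FLOPs) per core per clock; a minimal sketch using the A100's actual 1.41 GHz boost clock (the paper rounds to 1.4 GHz):

```python
# Theoretical FP32 throughput: SMs x cores/SM x FLOPs-per-clock x clock
def fp32_tflops(sms, clock_ghz, cores_per_sm=64, flops_per_clock=2):
    return sms * cores_per_sm * flops_per_clock * clock_ghz / 1000

print(f"A100:  {fp32_tflops(108, 1.41):.1f} TFLOPs")  # ~19.5 TFLOPs
print(f"GPU-N: {fp32_tflops(134, 1.41):.1f} TFLOPs")  # ~24.2 TFLOPs
```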

From previous information, we know that NVIDIA's H100 accelerator would be based on an MCM solution and utilize TSMC's 5nm process node. Hopper is supposed to carry two next-gen GPU modules, so we are looking at 288 SM units in total. We can't give a rundown on the core count yet since we don't know the number of cores featured in each SM, but if it sticks to 64 cores per SM, then we get 18,432 cores, which is 2.25x more than the full GA100 GPU configuration (the math is sketched below). NVIDIA could also leverage more FP64, FP16 & Tensor cores within its Hopper GPU, which would drive up performance immensely. And that's going to be a necessity to rival Intel's Ponte Vecchio, which is expected to feature a 1:1 FP64-to-FP32 ratio.
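Here is that dual-module arithmetic spelled out; the 144-SMs-per-module and 64-cores-per-SM figures are the speculation from the paragraph above, not confirmed specs:

```python
# Speculative dual-module Hopper core count vs. the full GA100 die
sms_per_module, modules, cores_per_sm = 144, 2, 64

total_sms   = sms_per_module * modules      # 288 SMs across both modules
total_cores = total_sms * cores_per_sm      # 18,432 FP32 cores
ga100_full  = 128 * 64                      # full GA100: 8,192 FP32 cores

print(total_cores, f"{total_cores / ga100_full:.2f}x")  # 18432, 2.25x
```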

The final configuration will likely ship with 134 of the 144 SM units enabled on each GPU module, so the GPU-N figures likely represent a single GH100 die in action. That said, it is unlikely that NVIDIA would reach the same FP32 or FP64 FLOPs as the MI200 without leaning on GPU Sparsity.


But NVIDIA likely has a secret weapon up its sleeve, and that would be the COPA-based GPU implementation of Hopper. NVIDIA talks about two domain-specialized COPA-GPUs based on the next-generation architecture: one for HPC and one for the DL segment. The HPC variant takes a very standard approach, consisting of an MCM GPU design and the respective HBM/MC+HBM (IO) chiplets, but the DL variant is where things start to get interesting. The DL variant houses a huge cache on an entirely separate die that is interconnected with the GPU modules.

Configuration   LLC Capacity (MB)   DRAM BW (TB/s)   DRAM Capacity (GB)
GPU-N           60                  2.7              100
COPA-GPU-1      960                 2.7              100
COPA-GPU-2      960                 4.5              167
COPA-GPU-3      1,920               2.7              100
COPA-GPU-4      1,920               4.5              167
COPA-GPU-5      1,920               6.3              233
Perfect L2      Infinite            Infinite         Infinite

Several variants have been outlined with up to 960 / 1,920 MB of LLC (Last-Level Cache), HBM2e DRAM capacities of up to 233 GB, and bandwidth of up to 6.3 TB/s. These are all theoretical, but given that NVIDIA has discussed them now, we may well see a Hopper variant with such a design during the full unveil at GTC 2022.
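For context, here is how the most aggressive configuration (COPA-GPU-5) compares against the GPU-N baseline, using only the values from the table above (a minimal Python sketch):

```python
# COPA-GPU-5 expressed as multipliers over the baseline GPU-N configuration
baseline = {"llc_mb": 60,   "bw_tbs": 2.7, "dram_gb": 100}
copa_5   = {"llc_mb": 1920, "bw_tbs": 6.3, "dram_gb": 233}

for key in baseline:
    print(f"{key}: {copa_5[key] / baseline[key]:.1f}x")
# llc_mb: 32.0x, bw_tbs: 2.3x, dram_gb: 2.3x
```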

NVIDIA Hopper GH100 'Preliminary Specs':

NVIDIA Tesla Graphics Card | Tesla K40 (PCI-Express) | Tesla M40 (PCI-Express) | Tesla P100 (PCI-Express) | Tesla P100 (SXM2) | Tesla V100 (SXM2) | NVIDIA A100 (SXM4) | NVIDIA H100 (SXM4?)
GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) | GV100 (Volta) | GA100 (Ampere) | GH100 (Hopper)
Process Node | 28nm | 28nm | 16nm | 16nm | 12nm | 7nm | 5nm
Transistors | 7.1 Billion | 8 Billion | 15.3 Billion | 15.3 Billion | 21.1 Billion | 54.2 Billion | TBD
GPU Die Size | 551 mm2 | 601 mm2 | 610 mm2 | 610 mm2 | 815 mm2 | 826 mm2 | TBD
SMs | 15 | 24 | 56 | 56 | 80 | 108 | 134 (Per Module)
TPCs | 15 | 24 | 28 | 28 | 40 | 54 | TBD
FP32 CUDA Cores Per SM | 192 | 128 | 64 | 64 | 64 | 64 | 64?
FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 | 32 | 32 | 32?
FP32 CUDA Cores | 2880 | 3072 | 3584 | 3584 | 5120 | 6912 | 8576 (Per Module), 17152 (Complete)
FP64 CUDA Cores | 960 | 96 | 1792 | 1792 | 2560 | 3456 | 4288 (Per Module)?, 8576 (Complete)?
Tensor Cores | N/A | N/A | N/A | N/A | 640 | 432 | TBD
Texture Units | 240 | 192 | 224 | 224 | 320 | 432 | TBD
Boost Clock | 875 MHz | 1114 MHz | 1329 MHz | 1480 MHz | 1530 MHz | 1410 MHz | ~1400 MHz
TOPs (DNN/AI) | N/A | N/A | N/A | N/A | 125 TOPs | 1248 TOPs, 2496 TOPs with Sparsity | TBD
FP16 Compute | N/A | N/A | 18.7 TFLOPs | 21.2 TFLOPs | 30.4 TFLOPs | 312 TFLOPs, 624 TFLOPs with Sparsity | 779 TFLOPs (Per Module)?, 1558 TFLOPs with Sparsity (Per Module)?
FP32 Compute | 5.04 TFLOPs | 6.8 TFLOPs | 10.0 TFLOPs | 10.6 TFLOPs | 15.7 TFLOPs | 19.4 TFLOPs (156 TFLOPs with Sparsity) | 24.2 TFLOPs (Per Module)? (193.6 TFLOPs with Sparsity?)
FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 4.7 TFLOPs | 5.30 TFLOPs | 7.80 TFLOPs | 19.5 TFLOPs (9.7 TFLOPs standard) | 24.2 TFLOPs (Per Module)? (12.1 TFLOPs standard)?
Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 6144-bit HBM2e | 6144-bit HBM2e
Memory Size | 12 GB GDDR5 @ 288 GB/s | 24 GB GDDR5 @ 288 GB/s | 16 GB HBM2 @ 732 GB/s, 12 GB HBM2 @ 549 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 900 GB/s | Up To 40 GB HBM2 @ 1.6 TB/s, Up To 80 GB HBM2 @ 1.6 TB/s | Up To 100 GB HBM2e @ 3.5 Gbps
L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | 4096 KB | 6144 KB | 40960 KB | 81920 KB
TDP | 235W | 250W | 250W | 300W | 300W | 400W | ~450-500W