NVIDIA Ampere GA100 GPU Powered Tesla A100: World's Largest 7nm GPU, 54 Billion Transistors, 1 Peta-Ops Compute & Up To 96 GB HBM2 Memory


NVIDIA has unveiled the GA100 GPU, its first chip based on the next-gen Ampere architecture and also the world's largest 7nm chip. Delivering up to 20 times the performance of its predecessor, the Volta GV100, Ampere ushers in a new era of high-performance computing as the first GPU in the world to exceed a peak compute rate of 1 Peta-Ops per second for AI/DNN workloads.

NVIDIA Unveils The World's Largest 7nm GPU, The Ampere GA100 - Powering The Tesla A100 With 54 Billion Transistors and Up To 96 GB Of The Fastest HBM2 Memory

Powered by the next-generation Ampere GPU architecture, the Tesla A100 is an impressive board for the HPC market. The first thing to talk about with any HPC GPU is its specs, and Ampere is a monster of a chip. NVIDIA went all out with the 7nm process node, making GA100 the largest 7nm chip in production. That's not all: it is also the most advanced and feature-packed chip in the industry right now.


The Ampere GA100 GPU is once again based on a bleeding-edge 7nm process node and packs a gargantuan 54 billion transistors. The chip is expected to feature 128 SM units for a total of 8192 CUDA cores, a 60% increase over the 5120 cores of the Volta GV100. For memory, we are looking at six HBM stacks, which points to a 6144-bit bus interface. The memory dies are likely from Samsung, which has been NVIDIA's strategic memory partner for HPC-centric GPUs.
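The SM-to-core arithmetic above can be sketched in a few lines. The figures below are the article's expectations for GA100, not confirmed specifications:

```python
# Expected GA100 shader configuration (speculative figures from the article).
sm_count = 128          # expected SM units on the full GA100 die
cores_per_sm = 64       # FP32 CUDA cores per SM, as on recent NVIDIA HPC GPUs
volta_cores = 5120      # Tesla V100 (GV100) FP32 CUDA core count

ga100_cores = sm_count * cores_per_sm
increase = (ga100_cores - volta_cores) / volta_cores

print(ga100_cores)        # 8192
print(f"{increase:.0%}")  # 60%
```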

NVIDIA's Ampere GA100 GPU is a massive chip featuring 54 billion transistors. (Image Credits: EETimes via Videocardz)

Samsung has recently announced its HBM2E DRAM featuring 16 Gb dies. Depending on the height of the stacks, NVIDIA could offer anywhere from 48 GB (4-Hi) all the way up to 96 GB (8-Hi), an insane amount of VRAM compared to the existing Tesla V100, which maxes out at 32 GB. The HBM2E stacks also deliver increased pin speeds of up to 3.2 Gbps, good for around 410 GB/s per stack or 2.5 TB/s across the entire chip. If NVIDIA decides to go with 4.2 Gbps dies instead, that would result in 3.2 TB/s of total bandwidth, an amazing technical feat.
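The bandwidth figures above follow from simple back-of-the-envelope math: pin speed (in Gb/s) times bus width (in bits) divided by 8 gives GB/s. A minimal sketch, assuming the article's six-stack, 1024-bit-per-stack configuration:

```python
def hbm_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Aggregate bandwidth in GB/s: pin speed (Gb/s) x bus width (bits) / 8."""
    return pin_speed_gbps * bus_width_bits / 8

# One HBM2E stack has a 1024-bit interface; six stacks give 6144 bits total.
print(hbm_bandwidth_gbs(3.2, 1024))  # 409.6  GB/s per stack (~410 GB/s)
print(hbm_bandwidth_gbs(3.2, 6144))  # 2457.6 GB/s chip-wide (~2.5 TB/s)
print(hbm_bandwidth_gbs(4.2, 6144))  # 3225.6 GB/s chip-wide (~3.2 TB/s)
```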

In terms of performance, the Ampere GA100 GPU delivers 1 Peta-Ops, a 20x increase over the Volta GV100 GPU. Double-precision performance is rated 2.5x higher than the GV100's, which should land at around 20 TFLOPs FP64 since Volta offers roughly 8 TFLOPs of FP64 compute. That would put single-precision performance at over 40 TFLOPs (FP32), which would be mind-blowing for the HPC segment.
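The extrapolation above can be written out explicitly. These are estimates derived from NVIDIA's claimed 2.5x uplift over an approximate Volta baseline, not official numbers:

```python
# Performance extrapolation from the article's stated ratios.
volta_fp64 = 8.0                 # TFLOPs, approximate GV100 FP64 figure
ampere_fp64 = volta_fp64 * 2.5   # NVIDIA's claimed double-precision uplift
ampere_fp32 = ampere_fp64 * 2    # FP32 is typically 2x FP64 on HPC GPUs

print(ampere_fp64, ampere_fp32)  # 20.0 40.0
```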

NVIDIA Ampere GA100 GPU Based Tesla A100 Specs:

| NVIDIA Tesla Graphics Card | NVIDIA H100 (SXM5) | NVIDIA H100 (PCIe) | NVIDIA A100 (SXM4) | NVIDIA A100 (PCIe4) | Tesla V100S (PCIe) | Tesla V100 (SXM2) | Tesla P100 (SXM2) | Tesla P100 (PCIe) | Tesla M40 | Tesla K40 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPU | GH100 (Hopper) | GH100 (Hopper) | GA100 (Ampere) | GA100 (Ampere) | GV100 (Volta) | GV100 (Volta) | GP100 (Pascal) | GP100 (Pascal) | GM200 (Maxwell) | GK110 (Kepler) |
| Process Node | 4nm | 4nm | 7nm | 7nm | 12nm | 12nm | 16nm | 16nm | 28nm | 28nm |
| Transistors | 80 Billion | 80 Billion | 54.2 Billion | 54.2 Billion | 21.1 Billion | 21.1 Billion | 15.3 Billion | 15.3 Billion | 8 Billion | 7.1 Billion |
| GPU Die Size | 814mm2 | 814mm2 | 826mm2 | 826mm2 | 815mm2 | 815mm2 | 610mm2 | 610mm2 | 601mm2 | 551mm2 |
| FP32 CUDA Cores Per SM | 128 | 128 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 192 |
| FP64 CUDA Cores Per SM | 128 | 128 | 32 | 32 | 32 | 32 | 32 | 32 | 4 | 64 |
| FP32 CUDA Cores | 16896 | 14592 | 6912 | 6912 | 5120 | 5120 | 3584 | 3584 | 3072 | 2880 |
| FP64 CUDA Cores | 16896 | 14592 | 3456 | 3456 | 2560 | 2560 | 1792 | 1792 | 96 | 960 |
| Tensor Cores | 528 | 456 | 432 | 432 | 640 | 640 | N/A | N/A | N/A | N/A |
| Texture Units | 528 | 456 | 432 | 432 | 320 | 320 | 224 | 224 | 192 | 240 |
| Boost Clock | TBD | TBD | 1410 MHz | 1410 MHz | 1601 MHz | 1530 MHz | 1480 MHz | 1329 MHz | 1114 MHz | 875 MHz |
| TOPs (DNN/AI) | 2000 TOPs (4000 TOPs with Sparsity) | 1600 TOPs (3200 TOPs with Sparsity) | 1248 TOPs (2496 TOPs with Sparsity) | 1248 TOPs (2496 TOPs with Sparsity) | 130 TOPs | 125 TOPs | N/A | N/A | N/A | N/A |
| FP16 Compute | 2000 TFLOPs | 1600 TFLOPs | 312 TFLOPs (624 TFLOPs with Sparsity) | 312 TFLOPs (624 TFLOPs with Sparsity) | 32.8 TFLOPs | 30.4 TFLOPs | 21.2 TFLOPs | 18.7 TFLOPs | N/A | N/A |
| FP32 Compute | 1000 TFLOPs | 800 TFLOPs | 156 TFLOPs (19.5 TFLOPs standard) | 156 TFLOPs (19.5 TFLOPs standard) | 16.4 TFLOPs | 15.7 TFLOPs | 10.6 TFLOPs | 10.0 TFLOPs | 6.8 TFLOPs | 5.04 TFLOPs |
| FP64 Compute | 60 TFLOPs | 48 TFLOPs | 19.5 TFLOPs (9.7 TFLOPs standard) | 19.5 TFLOPs (9.7 TFLOPs standard) | 8.2 TFLOPs | 7.80 TFLOPs | 5.30 TFLOPs | 4.7 TFLOPs | 0.2 TFLOPs | 1.68 TFLOPs |
| Memory Interface | 5120-bit HBM3 | 5120-bit HBM2e | 6144-bit HBM2e | 6144-bit HBM2e | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 384-bit GDDR5 | 384-bit GDDR5 |
| Memory Size | Up To 80 GB HBM3 @ 3.0 Gbps | Up To 80 GB HBM2e @ 2.0 Gbps | Up To 40 GB HBM2 @ 1.6 TB/s, Up To 80 GB HBM2 @ 1.6 TB/s | Up To 40 GB HBM2 @ 1.6 TB/s, Up To 80 GB HBM2 @ 2.0 TB/s | 16 GB HBM2 @ 1134 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 732 GB/s, 12 GB HBM2 @ 549 GB/s | 24 GB GDDR5 @ 288 GB/s | 12 GB GDDR5 @ 288 GB/s |
| L2 Cache Size | 50 MB | 50 MB | 40 MB | 40 MB | 6 MB | 6 MB | 4 MB | 4 MB | 3 MB | 1.5 MB |

NVIDIA's Ampere GA100 also introduces a new Tensor compute format known as Tensor Float 32, or TF32, handled by the 3rd-generation Tensor cores for higher AI/DNN throughput. The Tensor cores also natively support double-precision compute, which is what allows the GA100 GPU to hit a 2.5x performance increase over its predecessor. As of right now, nothing announced by the competition comes close to this beast.
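For context, TF32 keeps FP32's 8-bit exponent (so the same dynamic range) but carries only a 10-bit mantissa like FP16, which is what lets the Tensor cores process it so much faster. A rough illustration, assuming simple truncation of the low 13 mantissa bits (real hardware rounds rather than truncates):

```python
import struct

def to_tf32(x: float) -> float:
    """Illustrative TF32 emulation: keep FP32's sign and 8-bit exponent,
    truncate the 23-bit mantissa down to TF32's 10 bits.
    This is a sketch of the number format, not of the hardware rounding."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~0x1FFF  # zero the 13 low mantissa bits (23 - 10 = 13)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_tf32(3.14159265))  # 3.140625 - coarser than FP32, same range
```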


The DGX-A100 - The First HPC System With 140 Peta-OPs Compute Shipping Now For $199,000

Finally, NVIDIA announced its next-generation DGX A100 system, which Jensen Huang teased a few days ago. The DGX A100 delivers 5 Petaflops of peak performance from its eight Ampere-based Tesla A100 GPUs.

The system itself is 20x faster than the previous DGX based on NVIDIA's Volta GPU architecture. The reference cluster design features 140 DGX A100 systems linked over a 200 Gbps Mellanox InfiniBand interconnect. A single DGX A100 starts at $199,000 and is shipping as of today.