NVIDIA Teases Ampere GPU Powered DGX A100 Supercomputing System Ahead of GTC 2020, Calls It The World's Largest Graphics Card!


NVIDIA has posted a teaser of its next-generation Ampere GPU powered DGX A100 system, which is expected to be announced at GTC 2020 on 14th May 2020. The first references to the system were spotted just a week ago, and it now looks like we are getting a major HPC announcement from NVIDIA this week.

NVIDIA's CEO, Jensen Huang, Teases Next-Generation Ampere GPU Powered DGX A100 System For HPC

The specific name for the DGX system is DGX A100, which tells us quite a lot. The DGX line is designed solely for the deep learning and HPC community, offering supercomputing capabilities inside a workstation form factor. NVIDIA has released DGX solutions based on its Pascal and Volta GPUs, but with the release of the Ampere GPU imminent, a new DGX solution was inevitable.


In the teaser video below, NVIDIA's CEO, Jensen Huang, can be seen pulling a huge DGX A100 mainboard fresh out of the oven. The video, titled 'What's Jensen been cooking', is accompanied by the description 'The World's largest graphics card, fresh out of the oven'.

The Volta line of DGX systems was broadened to offer more options to HPC users. We saw several variants, ranging from the DGX Station with a total of four Tesla V100 GPUs all the way up to the DGX-2 monster housing 16 Tesla V100s, which NVIDIA termed the "World's Largest GPU".

With the Ampere GPU, NVIDIA will be releasing its latest DGX system, the DGX A100. The name makes it clear that the system is based on the GA100 GPU. The GA100 is expected to be the biggest chip in the Ampere lineup and should feature the flagship 128 SM configuration that we expect to see on a full NVIDIA GA100 chip.
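As a rough, speculative sanity check (assuming GA100 keeps Volta's 64 FP32 CUDA cores per SM, which NVIDIA has not confirmed), a full 128 SM chip would work out to 8192 CUDA cores, and the leaked 6912 / 7552 / 7936 core counts in the table further below would correspond to 108, 118 and 124 enabled SMs respectively:

```cpp
// Speculative back-of-the-envelope math, assuming 64 FP32 CUDA cores per SM
// (the Volta/GV100 ratio); NVIDIA has not confirmed GA100's SM layout.
#include <cstdio>

int main() {
    const int coresPerSM = 64;                 // assumed, Volta-style SM
    const int fullSMs    = 128;                // rumored full GA100 config
    const int leaked[]   = {6912, 7552, 7936}; // leaked Tesla core counts

    printf("Full GA100: %d SMs x %d = %d CUDA cores\n",
           fullSMs, coresPerSM, fullSMs * coresPerSM);
    for (int cores : leaked)
        printf("%d CUDA cores -> %d enabled SMs\n", cores, cores / coresPerSM);
    return 0;
}
```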

The specific DGX A100 mainboard that Jensen just cooked features a total of eight Ampere GPUs, each outfitted with a massive heatsink. Do note that DGX A100 systems are designed for server/HPC environments and hence are passively cooled. There are six heatsinks adjacent to the GPUs which may be sitting atop interconnect switches for GPU-to-GPU and GPU-to-CPU communication. There's much still to be revealed, so I'd suggest we wait two more days for Jensen to reveal the goodies himself.
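For readers curious what GPU-to-GPU communication looks like from the software side, here is a minimal, illustrative CUDA sketch (generic runtime API calls, not NVIDIA's DGX software) that checks which device pairs can access each other directly, the kind of peer-to-peer path that NVLink/NVSwitch fabrics in DGX-class systems are built to accelerate:

```cpp
// Minimal sketch using the standard CUDA runtime API: enumerate GPUs and
// report which pairs can talk to each other directly (peer-to-peer access).
// On NVLink/NVSwitch-connected systems this is typically possible for every
// pair; this is generic CUDA code, not DGX-specific.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Found %d CUDA device(s)\n", deviceCount);

    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d : peer access %s\n",
                   src, dst, canAccess ? "possible" : "not possible");
        }
    }
    return 0;
}
```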


NVIDIA Ampere GPU Powered DGX A100 System Fresh Out of the Oven by CEO Jensen Huang!

NVIDIA may start off its Ampere line of DGX systems in a more traditional manner, offering 8-GPU Tesla configurations at first and moving on to larger and denser parts later as yields for the new Ampere chips improve.

NVIDIA Tesla Graphics Cards Comparison

| Tesla Graphics Card Name | NVIDIA Tesla M2090 | NVIDIA Tesla K40 | NVIDIA Tesla K80 | NVIDIA Tesla P100 | NVIDIA Tesla V100 | NVIDIA Tesla Next-Gen #1 | NVIDIA Tesla Next-Gen #2 | NVIDIA Tesla Next-Gen #3 |
|---|---|---|---|---|---|---|---|---|
| GPU Architecture | Fermi | Kepler | Kepler | Pascal | Volta | Ampere? | Ampere? | Ampere? |
| GPU Process | 40nm | 28nm | 28nm | 16nm | 12nm | 7nm? | 7nm? | 7nm? |
| GPU Name | GF110 | GK110 | GK210 x 2 | GP100 | GV100 | GA100? | GA100? | GA100? |
| Die Size | 520mm2 | 561mm2 | 561mm2 | 610mm2 | 815mm2 | TBD | TBD | TBD |
| Transistor Count | 3.00 Billion | 7.08 Billion | 7.08 Billion | 15 Billion | 21.1 Billion | TBD | TBD | TBD |
| CUDA Cores | 512 CCs (16 CUs) | 2880 CCs (15 CUs) | 2496 CCs (13 CUs) x 2 | 3840 CCs | 5120 CCs | 6912 CCs | 7552 CCs | 7936 CCs |
| Core Clock | Up To 650 MHz | Up To 875 MHz | Up To 875 MHz | Up To 1480 MHz | Up To 1455 MHz | 1.08 GHz (Preliminary) | 1.11 GHz (Preliminary) | 1.11 GHz (Preliminary) |
| FP32 Compute | 1.33 TFLOPs | 4.29 TFLOPs | 8.74 TFLOPs | 10.6 TFLOPs | 15.0 TFLOPs | ~15 TFLOPs (Preliminary) | ~17 TFLOPs (Preliminary) | ~18 TFLOPs (Preliminary) |
| FP64 Compute | 0.66 TFLOPs | 1.43 TFLOPs | 2.91 TFLOPs | 5.30 TFLOPs | 7.50 TFLOPs | TBD | TBD | TBD |
| VRAM Size | 6 GB | 12 GB | 12 GB x 2 | 16 GB | 16 GB | 48 GB | 24 GB | 32 GB |
| VRAM Type | GDDR5 | GDDR5 | GDDR5 | HBM2 | HBM2 | HBM2e | HBM2e | HBM2e |
| VRAM Bus | 384-bit | 384-bit | 384-bit x 2 | 4096-bit | 4096-bit | 4096-bit? | 3072-bit? | 4096-bit? |
| VRAM Speed | 3.7 GHz | 6 GHz | 5 GHz | 737 MHz | 878 MHz | 1200 MHz | 1200 MHz | 1200 MHz |
| Memory Bandwidth | 177.6 GB/s | 288 GB/s | 240 GB/s | 720 GB/s | 900 GB/s | 1.2 TB/s? | 1.2 TB/s? | 1.2 TB/s? |
| Maximum TDP | 250W | 235W | 300W | 300W | 300W | TBD | TBD | TBD |

NVIDIA's Ampere GPUs are definitely going to shake things up in the HPC market, with several variants already leaked and performance reportedly rated at around 30 TFLOPs (FP32). We will keep you updated as more information arrives ahead of the 14th of May, when NVIDIA will present its next-gen GPU lineup.
