
NVIDIA Unveils Hopper GH100 Powered DGX H100, DGX SuperPod H100, H100 PCIe Accelerators


At GTC 2022, NVIDIA is announcing a range of products powered by its brand new Hopper GH100 GPUs such as the DGX H100, DGX SuperPod & several H100 PCIe accelerators.

NVIDIA Unveils Hopper GH100 GPU Lineup: Featuring DGX H100, DGX SuperPod H100 & H100 PCIe Accelerators

The NVIDIA DGX H100 and its various iterations such as the POD and EOS are aimed at the AI market, accelerating machine learning and data science performance for corporate offices, research facilities, labs, or home offices everywhere.


NVIDIA DGX H100 System Specifications

With the Hopper GPU, NVIDIA is releasing its latest DGX H100 system. The system is equipped with a total of 8 H100 accelerators in the SXM configuration and offers up to 640 GB of HBM3 memory & up to 32 PFLOPs of peak FP8 compute performance. For comparison, the existing DGX A100 system is equipped with 8 A100 GPUs with 640 GB of HBM2e memory and delivers a maximum of only 5 PFLOPs of AI compute and 10 POPs of INT8 performance.
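The system totals follow directly from the per-GPU figures; a quick back-of-the-envelope check (the per-GPU numbers of 80 GB HBM3 and 4 PFLOPs of FP8 are assumptions inferred from the totals, not stated in the announcement):

```python
# Back-of-the-envelope check of the DGX H100 aggregates quoted above.
# Per-GPU figures are assumptions inferred from the system totals.
GPUS_PER_SYSTEM = 8
HBM3_PER_GPU_GB = 80          # 8 x 80 GB = 640 GB total
FP8_PFLOPS_PER_GPU = 4        # 8 x 4 PFLOPs = 32 PFLOPs total

total_memory_gb = GPUS_PER_SYSTEM * HBM3_PER_GPU_GB
total_fp8_pflops = GPUS_PER_SYSTEM * FP8_PFLOPS_PER_GPU

print(total_memory_gb)    # 640
print(total_fp8_pflops)   # 32

# Versus the DGX A100's 5 PFLOPs of AI compute:
speedup = total_fp8_pflops / 5
print(round(speedup, 1))  # 6.4, roughly the "6x" NVIDIA quotes
```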

There are also two supercomputing platforms that NVIDIA has announced powered by its DGX H100 systems: the DGX POD H100 and Eos. The DGX POD offers 1 ExaFLOP of AI performance, 20 TB of HBM3 memory, 192 TFLOPs of SHARP in-network compute, and 70 TB/s of bisection bandwidth. The DGX POD's NVLink Switch System supports 20.5 TB of total HBM3 memory & 786 TB/s of total system memory bandwidth.

Eos takes things to the next level with its 18 DGX H100 PODs, featuring 18 EFLOPs of FP8, 9 EFLOPs of FP16, 275 PFLOPs of FP64, 3.7 PFLOPs of in-network compute, and 230 TB/s of bandwidth. The AI system is built around the new Quantum-2 InfiniBand switch, which packs 57 billion transistors, and offers 32x the AI acceleration of A100-based systems.


Coming to the specifications, the NVIDIA DGX H100 is powered by a total of eight H100 Tensor Core GPUs.


The system itself houses dual x86 CPUs with full PCIe Gen 5 support. Display output is provided through a discrete DGX Display Adapter card, which offers four DisplayPort outputs with up to 4K resolution support. The AIC features its own active cooling solution. The system also features four NVSwitches, 2 TB of system memory, two 1.9 TB NVMe M.2 drives for the operating system, and eight 3.84 TB NVMe U.2 SSDs for internal storage.

Talking about the cooling solution, the DGX H100 houses the H100 GPUs on the rear side of the chassis. The GPUs and CPUs are supplemented by a refrigerant cooling system that is whisper-quiet and maintenance-free, with the compressor located within the DGX chassis. Power consumption is rated at 10.2 kW (peak).

NVIDIA H100 PCIe Specifications

Lastly, we have the Hopper GH100-powered NVIDIA H100 PCIe accelerator. Unlike the H100 SXM5 configuration, the H100 PCIe offers cut-down specifications: 114 SMs enabled out of the GH100 GPU's full 144 SMs (the H100 SXM enables 132). The chip as such offers 3200 TFLOPs of FP8, 1600 TFLOPs of FP16, 800 TFLOPs of TF32, and 48 TFLOPs of FP64 compute horsepower, along with 456 Tensor & texture units. Due to its lower peak compute, the H100 PCIe should operate at lower clocks and as such features a 350W TDP, half the 700W TDP of the SXM5 variant. The PCIe card does retain the 80 GB of memory across a 5120-bit bus interface, but in an HBM2e configuration (>2 TB/s of bandwidth).
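The inference that the PCIe card runs at lower clocks can be sanity-checked from the SM counts and the peak-compute ratio; a rough sketch (the 4000 TFLOPs SXM FP8 figure is an assumption derived from NVIDIA's 4 PFLOPs-per-GPU rate, and clocks are treated as the only other variable):

```python
# Rough estimate of the clock deficit implied by the quoted H100 PCIe specs.
SXM_SMS, PCIE_SMS = 132, 114
SXM_FP8_TFLOPS = 4000   # assumption: 32 PFLOPs per 8-GPU DGX / 8
PCIE_FP8_TFLOPS = 3200

compute_ratio = PCIE_FP8_TFLOPS / SXM_FP8_TFLOPS   # 0.80
sm_ratio = PCIE_SMS / SXM_SMS                      # ~0.864

# If peak compute scales with SM count x clock, the implied clock ratio is:
clock_ratio = compute_ratio / sm_ratio
print(round(clock_ratio, 3))   # ~0.926, i.e. clocks roughly 7% lower
```

In other words, the SM cut alone does not account for the 20% lower peak throughput, which is what points to reduced clocks under the 350W limit.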

Press Release: NVIDIA today announced the fourth-generation NVIDIA DGX system, the world’s first AI platform to be built with new NVIDIA H100 Tensor Core GPUs.

DGX H100 systems deliver the scale demanded to meet the massive compute requirements of large language models, recommender systems, healthcare research, and climate science. Packing eight NVIDIA H100 GPUs per system, connected as one by NVIDIA NVLink, each DGX H100 provides 32 petaflops of AI performance at new FP8 precision — 6x more than the prior generation.

DGX H100 systems are the building blocks of the next-generation NVIDIA DGX POD and NVIDIA DGX SuperPOD AI infrastructure platforms. The latest DGX SuperPOD architecture features a new NVIDIA NVLink Switch System that can connect up to 32 nodes with a total of 256 H100 GPUs.

Announcing NVIDIA Eos — World’s Fastest AI Supercomputer
NVIDIA will be the first to build a DGX SuperPOD with the groundbreaking new AI architecture to power the work of NVIDIA researchers advancing climate science, digital biology, and the future of AI.

Its “Eos” supercomputer is expected to be the world’s fastest AI system after it begins operations later this year, featuring a total of 576 DGX H100 systems with 4,608 H100 GPUs.
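Eos's headline figures follow from the DGX H100 building block; a quick consistency check using the per-system numbers quoted above:

```python
# Consistency check of the Eos totals against the per-DGX H100 figures.
DGX_SYSTEMS = 576
GPUS_PER_DGX = 8
FP8_PFLOPS_PER_DGX = 32

total_gpus = DGX_SYSTEMS * GPUS_PER_DGX
total_ai_eflops = DGX_SYSTEMS * FP8_PFLOPS_PER_DGX / 1000

print(total_gpus)                 # 4608
print(round(total_ai_eflops, 1))  # 18.4
```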

NVIDIA Eos is anticipated to provide 18.4 exaflops of AI computing performance, 4x faster AI processing than the Fugaku supercomputer in Japan, which is currently the world’s fastest system. For traditional scientific computing, Eos is expected to provide 275 petaflops of performance.

Eos will serve as a blueprint for advanced AI infrastructure from NVIDIA, as well as its OEM and cloud partners.

Enterprise AI Scales Easily With DGX H100 Systems, DGX POD, and DGX SuperPOD
DGX H100 systems easily scale to meet the demands of AI as enterprises grow from initial projects to broad deployments.

In addition to eight H100 GPUs with an aggregated 640 billion transistors, each DGX H100 system includes two NVIDIA BlueField-3 DPUs to offload, accelerate and isolate advanced networking, storage, and security services.

Eight NVIDIA ConnectX-7 Quantum-2 InfiniBand networking adapters provide 400 gigabits per second of throughput to connect with computing and storage — double the speed of the prior generation system. And fourth-generation NVLink, combined with NVSwitch, provides 900 gigabytes per second of connectivity between every GPU in each DGX H100 system, 1.5x more than the prior generation.
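The "1.5x" claim lines up with the prior generation's per-GPU NVLink rate; a small check (the A100's 600 GB/s third-generation NVLink figure is an outside assumption, not stated in the press release):

```python
# NVLink generational comparison; the A100 figure is an outside assumption.
H100_NVLINK_GBPS = 900   # 4th-gen NVLink, per GPU, per the press release
A100_NVLINK_GBPS = 600   # 3rd-gen NVLink, per GPU (assumed)

print(H100_NVLINK_GBPS / A100_NVLINK_GBPS)  # 1.5

# Networking: eight ConnectX-7 adapters at 400 Gb/s each
total_network_gbps = 8 * 400
print(total_network_gbps)  # 3200 Gb/s, i.e. 3.2 Tb/s aggregate throughput
```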

DGX H100 systems use dual x86 CPUs and can be combined with NVIDIA networking and storage from NVIDIA partners to make flexible DGX PODs for AI computing at any size.

DGX SuperPOD provides a scalable enterprise AI center of excellence with DGX H100 systems. The DGX H100 nodes and H100 GPUs in a DGX SuperPOD are connected by an NVLink Switch System and NVIDIA Quantum-2 InfiniBand providing a total of 70 terabytes/sec of bandwidth – 11x higher than the previous generation. Storage from NVIDIA partners will be tested and certified to meet the demands of DGX SuperPOD AI computing.

Multiple DGX SuperPOD units can be combined to provide the AI performance needed to develop massive models in industries such as automotive, healthcare, manufacturing, communications, retail, and more.

NVIDIA DGX H100 systems, DGX PODs, and DGX SuperPODs will be available from NVIDIA’s global partners starting in the third quarter.