
NVIDIA Unveils DGX SATURNV – World’s Most Efficient SuperComputer Powered by Pascal GP100, Delivers 9.46 Gigaflops/Watt


NVIDIA has announced its latest DGX SATURNV supercomputer, designed to help build smarter cars and next-generation GPUs. The DGX SATURNV is billed as the world's most efficient supercomputer and utilizes NVIDIA Pascal GPUs.

NVIDIA's DGX SATURNV SuperComputer Is The World's Most Efficient - Utilizes Tesla P100 GPUs

The DGX SATURNV is ranked 28th on the Top500 list of supercomputers and is also the most efficient of them all. The supercomputer houses several DGX-1 units, NVIDIA's custom-designed server based on its Tesla P100 graphics chips. Until now, the most efficient machine on the Top500 list was rated at 6.67 GigaFLOPs/Watt. The NVIDIA-designed DGX SATURNV delivers an incredible 9.46 GigaFLOPs/Watt, a 42% improvement.
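
As a sanity check on those figures, the improvement works out as follows (a quick illustrative calculation, not an NVIDIA-published formula):

```python
# Efficiency figures quoted above, in GigaFLOPs per Watt.
previous_best = 6.67   # most efficient Top500 machine before SATURNV
saturnv = 9.46         # NVIDIA DGX SATURNV

# Relative improvement of SATURNV over the previous best.
improvement = (saturnv / previous_best - 1) * 100
print(f"Efficiency improvement: {improvement:.0f}%")  # ≈ 42%
```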

That efficiency is key to building machines capable of reaching exascale speeds — that’s 1 quintillion, or 1 billion billion, floating-point operations per second. Such a machine could help design efficient new combustion engines, model clean-burning fusion reactors, and achieve new breakthroughs in medical research. via NVIDIA
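
To see why efficiency matters at that scale, here is a back-of-envelope estimate of the power an exascale machine would draw at SATURNV's efficiency (an illustration derived from the figures above, not an NVIDIA projection):

```python
# One exaFLOPs = 1 quintillion floating-point operations per second.
exaflops = 1e18
# SATURNV's measured efficiency: 9.46 GigaFLOPs per Watt.
efficiency_flops_per_watt = 9.46e9

# Power draw in megawatts if that efficiency held at exascale.
power_mw = exaflops / efficiency_flops_per_watt / 1e6
print(f"Estimated power at exascale: {power_mw:.0f} MW")  # ≈ 106 MW
```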

What Powers The DGX SATURNV?

Powering the NVIDIA DGX SATURNV are 124 DGX-1 units. The NVIDIA DGX-1 is a supercomputer inside a box, capable of delivering large amounts of performance in a small package.

The NVIDIA DGX-1 is a complete supercomputing solution that houses NVIDIA's latest hardware and software innovations, from the Pascal architecture to the NVIDIA SDK suite. The DGX-1 has performance throughput equivalent to 250 x86 servers. This insane amount of performance lets users get their own supercomputer for HPC and AI-specific workloads.

Assembled by a team of a dozen engineers using 124 DGX-1s — the AI supercomputer in a box we unveiled in April — SATURNV helps us build the autonomous driving software that’s a key part of our NVIDIA DRIVE PX 2 self-driving vehicle platform. via NVIDIA

Some of the key specifications of NVIDIA’s DGX-1 Unit include:

  • Up to 170 teraflops of half-precision (FP16) peak performance
  • Eight Tesla P100 GPU accelerators, 16GB memory per GPU
  • NVLink Hybrid Cube Mesh
  • Dual 20-core Intel Xeon E5-2698 v4 ("Broadwell-EP") CPUs @ 2.2 GHz
  • 7TB SSD DL Cache
  • Dual 10GbE, Quad InfiniBand 100Gb networking
  • 3U – 3200W
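
Scaling the per-node FP16 figure across all 124 nodes gives SATURNV's theoretical peak (a rough aggregate from the specs above; real-world throughput will be lower due to scaling losses):

```python
# SATURNV aggregate FP16 peak from the DGX-1 spec list above.
nodes = 124                    # DGX-1 units in SATURNV
fp16_per_node_tflops = 170     # peak FP16 TFLOPs per DGX-1

total_pflops = nodes * fp16_per_node_tflops / 1000
print(f"Peak FP16 throughput: {total_pflops:.1f} PFLOPs")  # ≈ 21.1 PFLOPs
```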

DGX-1 is an appliance that integrates deep learning software, development tools and eight of our Tesla P100 GPUs — based on our new Pascal architecture — to pack computing power equal to 250 x86 servers into a device about the size of a stove top. via NVIDIA

The Tesla P100 is the heart of the DGX-1 platform. Featuring the latest Pascal architecture with 3584 CUDA cores, 224 texture mapping units, clock speeds of up to 1480 MHz and 16 GB of HBM2 VRAM (732 GB/s memory bandwidth), the DGX-1 is prepped for the most intensive workloads pitted against it. The chip delivers 5.3 TFLOPs of FP64, 10.6 TFLOPs of FP32 and 21.2 TFLOPs of FP16 compute performance. It comes in a 300W package yet delivers up to 17.7 GFLOPs/Watt of double-precision compute.
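
That performance-per-watt figure follows directly from the FP64 rating and the TDP (a quick check of the arithmetic, using the numbers above):

```python
# Tesla P100 (SXM2) figures from the paragraph above.
fp64_tflops = 5.3    # peak double-precision compute
tdp_watts = 300      # thermal design power

# Double-precision efficiency in GFLOPs per Watt.
gflops_per_watt = fp64_tflops * 1000 / tdp_watts
print(f"{gflops_per_watt:.1f} GFLOPs/Watt")  # ≈ 17.7
```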

“This system is internally at Nvidia for our self-driving car initiatives,” says Buck. “We are also using it for chip and wafer defect analysis and for our own sales and marketing analytics. We are also taking the framework we are using on this system and using it as the starting point for the CANDLE framework for cancer research. You only need 36 of these nodes to reach one petaflops, and it really speaks to our strategy of building strong nodes. The small number of nodes makes it really tractable for us to build a system like Saturn V.” via NextPlatform

The DGX SATURNV proves that NVIDIA's Pascal GP100 was designed with the AI / datacenter market in mind, offering incredible power efficiency along with increased performance over previous-generation graphics processing units.

NVIDIA Tesla Graphics Card Specs:

| NVIDIA Tesla Graphics Card | Tesla K40 (PCI-Express) | Tesla M40 (PCI-Express) | Tesla P100 (PCI-Express) | Tesla P100 (SXM2) | Tesla V100 (PCI-Express) | Tesla V100 (SXM2) | Tesla V100S (PCIe) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) | GV100 (Volta) | GV100 (Volta) | GV100 (Volta) |
| Process Node | 28nm | 28nm | 16nm | 16nm | 12nm | 12nm | 12nm |
| Transistors | 7.1 Billion | 8 Billion | 15.3 Billion | 15.3 Billion | 21.1 Billion | 21.1 Billion | 21.1 Billion |
| GPU Die Size | 551 mm² | 601 mm² | 610 mm² | 610 mm² | 815 mm² | 815 mm² | 815 mm² |
| SMs | 15 | 24 | 56 | 56 | 80 | 80 | 80 |
| TPCs | 15 | 24 | 28 | 28 | 40 | 40 | 40 |
| CUDA Cores Per SM | 192 | 128 | 64 | 64 | 64 | 64 | 64 |
| CUDA Cores (Total) | 2880 | 3072 | 3584 | 3584 | 5120 | 5120 | 5120 |
| Texture Units | 240 | 192 | 224 | 224 | 320 | 320 | 320 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 | 32 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1792 | 2560 | 2560 | 2560 |
| Base Clock | 745 MHz | 948 MHz | 1190 MHz | 1328 MHz | 1230 MHz | 1297 MHz | TBD |
| Boost Clock | 875 MHz | 1114 MHz | 1329 MHz | 1480 MHz | 1380 MHz | 1530 MHz | 1601 MHz |
| FP16 Compute | N/A | N/A | 18.7 TFLOPs | 21.2 TFLOPs | 28.0 TFLOPs | 30.4 TFLOPs | 32.8 TFLOPs |
| FP32 Compute | 5.04 TFLOPs | 6.8 TFLOPs | 10.0 TFLOPs | 10.6 TFLOPs | 14.0 TFLOPs | 15.7 TFLOPs | 16.4 TFLOPs |
| FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 4.7 TFLOPs | 5.30 TFLOPs | 7.0 TFLOPs | 7.80 TFLOPs | 8.2 TFLOPs |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | 12 GB GDDR5 @ 288 GB/s | 24 GB GDDR5 @ 288 GB/s | 16 GB HBM2 @ 732 GB/s / 12 GB HBM2 @ 549 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 1134 GB/s |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | 4096 KB | 6144 KB | 6144 KB | 6144 KB |
| TDP | 235W | 250W | 250W | 300W | 250W | 300W | 250W |
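
The FP32 figures in the table follow from CUDA core count and boost clock, assuming one fused multiply-add (two FLOPs) per core per clock (a standard rule of thumb for peak throughput, not an official NVIDIA formula):

```python
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    """Peak FP32 TFLOPs: cores x clock x 2 FLOPs (one FMA) per clock."""
    return cuda_cores * boost_mhz * 2 / 1e6

# Cross-check against the table above.
print(fp32_tflops(3584, 1480))  # Tesla P100 (SXM2): ≈ 10.6 TFLOPs
print(fp32_tflops(5120, 1530))  # Tesla V100 (SXM2): ≈ 15.7 TFLOPs
```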