NVIDIA Volta GV100 GPU Powers Updated DGX-1, HGX-1 and DGX Station Supercomputing Systems – Available in Q3 2017 With Prices Up To $149K

May 10, 2017

Alongside the insanely powerful Volta GV100 GPU, NVIDIA has also announced the next iteration of its DGX-1 and HGX-1 supercomputing systems, designed to power AI, deep learning and neural networking workloads.

NVIDIA Volta GV100 GPU Upgrades DGX-1 and HGX-1 Supercomputers With Irresponsible Amounts of Power

The Volta GV100 GPU will power three systems designed by NVIDIA: the DGX-1V, HGX-1V and the DGX Station. All three make use of multiple Tesla V100 graphics cards and are aimed at a range of users, from research specialists to cloud providers to personal deskside computing. All of the systems pack insane amounts of compute into very powerful configurations, and that power comes at a great cost.


And now, Jensen announces the NVIDIA DGX-1 with eight Tesla V100s. It’s labeled on the slide as the “essential instrument of AI research.” What used to take a week now takes a shift. It replaces 400 servers. It offers 960 tensor TFLOPS. It will ship in Q3 and will cost $149,000. He notes that if you buy one now powered by Pascal, you’ll get a free upgrade to Volta.

Turns out, there’s also a smaller version of the DGX-1, the DGX Station. Think of it as a personal-sized one. It’s liquid cooled and whisper quiet. Every one of NVIDIA’s deep learning engineers has one.

It has four Tesla V100s. It’s $69K. Order it now and we’ll deliver it in Q3. “So place your order now,” he avers. via NVIDIA

NVIDIA DGX/HGX Supercomputers

NVIDIA Volta GV100 GPU Based DGX-1 Supercomputer For AI Research – $149,000 US Price

So first up, we have the NVIDIA DGX-1, a direct successor to the Pascal based DGX-1. This time, we are looking at eight Tesla V100 GPUs instead of eight Tesla P100 GPUs. The total horsepower of this machine has been boosted from 170 TFLOPs of FP16 compute to 960 TFLOPs of Tensor compute, a direct result of the new Tensor Cores featured inside the Volta GV100 GPU core.
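The 960 TFLOPS figure can be roughly reconstructed from NVIDIA's published Volta numbers. As a back-of-the-envelope check (assuming the disclosed 8 Tensor Cores per SM and a 4x4x4 matrix FMA, i.e. 128 FLOPS per Tensor Core per clock):

```python
# Sanity check of the 960 "tensor TFLOPS" claim for an 8x Tesla V100 DGX-1.
# Assumptions (from NVIDIA's Volta disclosures): 8 Tensor Cores per SM,
# each performing a 4x4x4 FMA = 64 multiply-adds = 128 FLOPS per clock.
sms = 80                      # streaming multiprocessors on GV100 (SXM2)
tensor_cores = sms * 8        # 640 Tensor Cores per GPU
flops_per_core_per_clk = 128  # 4x4x4 matrix FMA per Tensor Core
boost_clock_hz = 1455e6       # V100 SXM2 boost clock

per_gpu_tflops = tensor_cores * flops_per_core_per_clk * boost_clock_hz / 1e12
system_tflops = 8 * per_gpu_tflops  # eight Tesla V100s in the DGX-1

print(round(per_gpu_tflops, 1))  # ~119.2, marketed as "120 TFLOPS" per GPU
print(round(system_tflops))      # ~954, marketed as "960 tensor TFLOPS"
```

The marketing figure of 960 simply assumes the round 120 TFLOPS per-GPU number times eight.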


In terms of specifications, this machine rocks eight Tesla V100 GPUs with 5,120 CUDA cores and 640 Tensor Cores each, for a total of 40,960 CUDA cores and 5,120 Tensor Cores. The DGX-1 houses a total of 128 GB of HBM2 memory across its eight Tesla V100 GPUs. The system features dual Intel Xeon E5-2698 V4 processors, each with 20 cores and 40 threads clocked at 2.2 GHz, alongside 512 GB of DDR4 memory. Storage comes in the form of four 1.92 TB SSDs configured in RAID 0, networking via dual 10 GbE ports with up to four EDR InfiniBand links, and the system is fed by a 3.2 KW PSU.

The system provides access to today’s most popular deep learning frameworks, the NVIDIA DIGITS deep learning training application, third-party accelerated solutions, the NVIDIA Deep Learning SDK (e.g. cuDNN, cuBLAS), the CUDA toolkit, the NCCL library for fast multi-GPU collectives, NVIDIA Docker and NVIDIA drivers. In terms of interconnect performance, the DGX-1 with Tesla V100 sees roughly a 10X bandwidth uplift over PCI Express thanks to the new NVLINK 2.0 interconnect, which is rated at 300 GB/s. Deep learning training is up to 3x faster than on the Pascal GP100 based system. The NVIDIA DGX-1 will cost $149,000 US and will be available in Q3 2017.
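The quoted "10X over PCI Express" comes out of simple bandwidth arithmetic. A minimal sketch, assuming NVLink 2.0's six 50 GB/s links per V100 and the ~32 GB/s bidirectional bandwidth of a PCIe 3.0 x16 slot (the PCIe figure is an assumption, not stated in the article):

```python
# Rough arithmetic behind the "10X over PCI Express" interconnect claim.
# Assumption: NVLink 2.0 on V100 = six links at 50 GB/s each (300 GB/s total),
# versus ~32 GB/s bidirectional for a PCIe 3.0 x16 slot.
nvlink_links = 6
nvlink_gbps_per_link = 50.0                        # bidirectional, per link
nvlink_total = nvlink_links * nvlink_gbps_per_link # 300 GB/s, as quoted

pcie3_x16_gbps = 32.0                              # ~16 GB/s each direction

print(nvlink_total / pcie3_x16_gbps)  # 9.375, rounded to "10X" in marketing
```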

NVIDIA Volta GV100 GPU Based DGX Station – A Liquid Cooled Computing Powerhouse – $69,000 US Price

NVIDIA is also announcing a new Volta based system known as the DGX Station. It is similar to the DIGITS DevBox but comes with a revised specs list. The system is designed as a personal deskside supercomputer and features 480 TFLOPs of FP16 performance, which is 3x the deep learning training performance of today’s fastest GPU workstations. It also features a 5x increase in overall I/O performance over PCI-Express based systems. The total computing capacity of this workstation is equivalent to 400 CPUs, which is impressive.

Specifications include four NVIDIA Tesla V100 GPUs in the PCIe form factor. There’s a total of 20,480 CUDA Cores inside the system and an additional 2,560 Tensor Cores, along with 64 GB of HBM2 VRAM. Other specs include a Xeon E5-2698 V4 CPU, 256 GB of LRDIMM DDR4 system memory and four 1.92 TB SSDs, of which three are configured in RAID 0 while the remaining one holds the OS. The total system power requirement is 1500W, and the system is entirely liquid cooled for excellent cooling performance under full load.
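The DGX Station totals fall straight out of the per-GPU V100 figures. A quick sanity check, assuming 5,120 CUDA cores, 640 Tensor Cores, 16 GB of HBM2 and the round ~120 tensor TFLOPS marketing number per card:

```python
# Cross-check of the DGX Station spec list: four Tesla V100 (PCIe) cards.
gpus = 4
cuda_cores = gpus * 5120    # per-GPU count from the V100 spec
tensor_cores = gpus * 640   # 8 Tensor Cores per SM x 80 SMs per GPU
hbm2_gb = gpus * 16         # 16 GB HBM2 per card
fp16_tflops = gpus * 120    # ~120 tensor TFLOPS per GPU (marketing figure)

print(cuda_cores, tensor_cores, hbm2_gb, fp16_tflops)
# 20480 2560 64 480 -- matching the 20,480 CUDA cores, 2,560 Tensor Cores,
# 64 GB HBM2 and 480 TFLOPs quoted above
```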

NVIDIA Tesla V100 in PCI-e form factor is a beast of a card, featuring 16 GB HBM2 VRAM and 5120 CUDA Cores.

Greater Deep Learning Performance in a Personal Supercomputer

The new NVIDIA DGX Station is the world’s first personal supercomputer for AI development, with the computing capacity of 400 CPUs at roughly 1/40th the power, in a form factor that fits neatly deskside.

Engineered for peak performance and deskside comfort, the DGX Station is the world’s quietest workstation, producing one-tenth the noise of other deep learning workstations. Data scientists can use it for compute-intensive AI exploration, including training deep neural networks, inferencing and advanced analytics. via NVIDIA

NVIDIA Volta GV100 GPU Based HGX-1 Supercomputer For Cloud Computing

NVIDIA also has a cloud computing option known as the HGX-1, which will be upgraded with Volta Tesla V100 GPUs. The system likewise comes with eight Tesla V100 GPUs connected in an NVLINK hybrid cube mesh. The platform is mainly aimed at cloud workloads spanning GRID graphics, CUDA HPC stacks and the NVIDIA deep learning stack.

That’s a whole bunch of announcements from NVIDIA. With a launch suggested for around Q3 2017, we are sure to see these HPC, workstation and datacenter machines in action soon.


NVIDIA Volta Tesla V100 Specs:

| NVIDIA Tesla Graphics Card | Tesla K40 (PCI-Express) | Tesla M40 (PCI-Express) | Tesla P100 (PCI-Express, 12 GB) | Tesla P100 (PCI-Express, 16 GB) | Tesla P100 (SXM2) | Tesla V100 (PCI-Express) | Tesla V100 (SXM2) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) | GP100 (Pascal) | GV100 (Volta) | GV100 (Volta) |
| Process Node | 28nm | 28nm | 16nm | 16nm | 16nm | 12nm | 12nm |
| Transistors | 7.1 Billion | 8 Billion | 15.3 Billion | 15.3 Billion | 15.3 Billion | 21.1 Billion | 21.1 Billion |
| GPU Die Size | 551 mm² | 601 mm² | 610 mm² | 610 mm² | 610 mm² | 815 mm² | 815 mm² |
| SMs | 15 | 24 | 56 | 56 | 56 | 80 | 80 |
| TPCs | 15 | 24 | 28 | 28 | 28 | 40 | 40 |
| CUDA Cores Per SM | 192 | 128 | 64 | 64 | 64 | 64 | 64 |
| CUDA Cores (Total) | 2880 | 3072 | 3584 | 3584 | 3584 | 5120 | 5120 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 | 32 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1792 | 1792 | 2560 | 2560 |
| Base Clock | 745 MHz | 948 MHz | TBD | TBD | 1328 MHz | TBD | 1370 MHz |
| Boost Clock | 875 MHz | 1114 MHz | 1300 MHz | 1300 MHz | 1480 MHz | 1370 MHz | 1455 MHz |
| FP16 Compute | N/A | N/A | 18.7 TFLOPs | 18.7 TFLOPs | 21.2 TFLOPs | 28.0 TFLOPs | 30.0 TFLOPs |
| FP32 Compute | 5.04 TFLOPs | 6.8 TFLOPs | 10.0 TFLOPs | 10.0 TFLOPs | 10.6 TFLOPs | 14.0 TFLOPs | 15.0 TFLOPs |
| FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 4.7 TFLOPs | 4.7 TFLOPs | 5.30 TFLOPs | 7.0 TFLOPs | 7.50 TFLOPs |
| Texture Units | 240 | 192 | 224 | 224 | 224 | 320 | 320 |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | 12 GB GDDR5 @ 288 GB/s | 24 GB GDDR5 @ 288 GB/s | 12 GB HBM2 @ 549 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 900 GB/s |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | 4096 KB | 4096 KB | 6144 KB | 6144 KB |
| TDP | 235W | 250W | 250W | 250W | 300W | 250W | 300W |
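The single-precision figures in the table are internally consistent with the core counts and boost clocks: peak FP32 is simply CUDA cores × 2 FLOPS (one FMA per core per clock) × boost clock. A quick check against two of the SXM2 entries:

```python
# Verify the table's FP32 figures from cores x 2 FLOPS (FMA) x boost clock.
def fp32_tflops(cuda_cores, boost_mhz):
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

print(round(fp32_tflops(5120, 1455), 1))  # V100 SXM2 -> 14.9 (table: 15.0)
print(round(fp32_tflops(3584, 1480), 1))  # P100 SXM2 -> 10.6 (table: 10.6)
```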