NVIDIA Volta Tesla V100 GPU Accelerator Compute Performance Revealed – Features A Monumental Increase Over Pascal Based Tesla P100

Sep 17, 2017

NVIDIA’s flagship and the fastest graphics accelerator in the world, the Volta GPU based Tesla V100, is now shipping to customers around the globe. The new GPU is a marvel of engineering with a broad range of technologies such as the 12nm process node, NVLINK 2.0, HBM2 memory, Tensor Cores and a highly efficient architecture design that make it the most suitable chip for heavy compute and AI (deep learning) workloads.

NVIDIA Volta GV100 GPU Based Tesla V100 Benchmarked – A Monumental Performance Increase in Geekbench Compute Test Over The Pascal GP100 Based Tesla P100

Released just a year after the Pascal based Tesla P100, the Volta based Tesla V100 bests its predecessor in every possible way. And just like its predecessor, the flagship is aimed at the deep learning and compute markets. At GTC 2017, we got to learn almost everything about the Volta GV100 GPU, but now we have the first independent test results, and they are a shocker.


The system tested in Geekbench 4 was an NVIDIA DGX-1. The DGX-1 is what NVIDIA calls a supercomputer in a box, a powerful machine that delivers some astonishing performance results. As per official claims, the total horsepower of the DGX-1 has been boosted from 170 TFLOPs of FP16 compute (on the Pascal based model) to 960 TFLOPs of FP16 compute, a direct result of the new Tensor Cores featured inside the Volta GV100 GPU.

In terms of specifications, this machine rocks eight Tesla V100 GPUs with 5120 CUDA cores and 640 Tensor Cores each. That totals 40,960 CUDA cores and 5120 Tensor Cores, along with 128 GB of HBM2 memory across the eight GPUs. The system features dual Intel Xeon E5-2698 V4 processors, each with 20 cores and 40 threads clocked at 2.2 GHz, and 512 GB of DDR4 memory. Storage is provided by four 1.92 TB SSDs configured in RAID 0, while networking is dual 10 GbE with up to four EDR InfiniBand ports. The system comes with a 3.2 KW PSU. You can find more details here.
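The aggregate figures above follow directly from the per-GPU specifications. A minimal sketch of the arithmetic (the 640 Tensor Cores per GPU figure comes from NVIDIA's published V100 spec):

```python
# Verify the DGX-1 aggregate figures from per-GPU numbers:
# 8x Tesla V100, each with 5120 CUDA cores, 640 Tensor Cores,
# and 16 GB of HBM2 memory.
GPUS = 8
CUDA_CORES_PER_GPU = 5120
TENSOR_CORES_PER_GPU = 640
HBM2_PER_GPU_GB = 16

total_cuda = GPUS * CUDA_CORES_PER_GPU      # 40960 CUDA cores
total_tensor = GPUS * TENSOR_CORES_PER_GPU  # 5120 Tensor Cores
total_hbm2 = GPUS * HBM2_PER_GPU_GB         # 128 GB HBM2

print(total_cuda, total_tensor, total_hbm2)  # 40960 5120 128
```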

Now comes the part where we unveil the results. The NVIDIA DGX-1 currently holds the fastest compute score in the Geekbench 4 database; there’s no setup in sight that can dethrone this beast. For comparison, an HP Z8 G4 Workstation, which offers a total of nine PCIe slots, scores 278706 points in the OpenCL API with a Quadro GP100, which is essentially a Tesla P100 spec’d card. Moving over to the fastest Tesla P100 listing, we see eight PCIe cards configured together reach a score of 320031 in the CUDA API. Now look at the mind-boggling Tesla V100 scores: a DGX-1 system with eight SXM2 Tesla V100 cards scores 418504 in the OpenCL API and a monumental 743537 points in the CUDA API.
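Put in relative terms, the scores quoted above work out to the following speedups (a quick sketch using only the article's own numbers; labels are ours):

```python
# Relative speedups from the Geekbench 4 compute scores quoted above
# (higher is better).
scores = {
    "Quadro GP100, OpenCL": 278706,
    "8x Tesla P100 PCIe, CUDA": 320031,
    "DGX-1 8x Tesla V100, OpenCL": 418504,
    "DGX-1 8x Tesla V100, CUDA": 743537,
}

v100_cuda = scores["DGX-1 8x Tesla V100, CUDA"]
p100_cuda = scores["8x Tesla P100 PCIe, CUDA"]
v100_opencl = scores["DGX-1 8x Tesla V100, OpenCL"]

# V100 system vs fastest P100 listing, both under CUDA: ~2.32x
print(f"V100 vs P100 (CUDA): {v100_cuda / p100_cuda:.2f}x")
# CUDA vs OpenCL on the same V100 hardware: ~1.78x
print(f"V100 CUDA vs OpenCL: {v100_cuda / v100_opencl:.2f}x")
```

The second ratio is what the next paragraph refers to: a large chunk of the headline number comes from CUDA-side tuning, not hardware alone.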


The score puts the Tesla V100 in an impressive lead over its predecessor, which is something we are excited to see. It also suggests we could be looking at a generational leap in the gaming GPU segment if the performance of this chip architecture carries over to the mainstream markets. Another thing worth pointing out is the incredible tuning of compute output with the CUDA API and its related libraries. Not only does the Tesla V100 see big gains over OpenCL, but the same holds for the Tesla P100, which means NVIDIA is doing some serious work on its cuDNN library, and it’s expected to get even better in the coming generations. So there you have it: NVIDIA’s fastest GPU showing off some killer performance in its specified compute workloads.

NVIDIA Volta Tesla V100 Specs:

| NVIDIA Tesla Graphics Card | Tesla K40 (PCI-Express) | Tesla M40 (PCI-Express) | Tesla P100 (PCI-Express, 12 GB) | Tesla P100 (PCI-Express, 16 GB) | Tesla P100 (SXM2) | Tesla V100 (PCI-Express) | Tesla V100 (SXM2) |
|---|---|---|---|---|---|---|---|
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) | GP100 (Pascal) | GV100 (Volta) | GV100 (Volta) |
| Process Node | 28nm | 28nm | 16nm | 16nm | 16nm | 12nm | 12nm |
| Transistors | 7.1 Billion | 8 Billion | 15.3 Billion | 15.3 Billion | 15.3 Billion | 21.1 Billion | 21.1 Billion |
| GPU Die Size | 551 mm² | 601 mm² | 610 mm² | 610 mm² | 610 mm² | 815 mm² | 815 mm² |
| SMs | 15 | 24 | 56 | 56 | 56 | 80 | 80 |
| TPCs | 15 | 24 | 28 | 28 | 28 | 40 | 40 |
| CUDA Cores Per SM | 192 | 128 | 64 | 64 | 64 | 64 | 64 |
| CUDA Cores (Total) | 2880 | 3072 | 3584 | 3584 | 3584 | 5120 | 5120 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 | 32 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1792 | 1792 | 2560 | 2560 |
| Base Clock | 745 MHz | 948 MHz | TBD | TBD | 1328 MHz | TBD | 1370 MHz |
| Boost Clock | 875 MHz | 1114 MHz | 1300 MHz | 1300 MHz | 1480 MHz | 1370 MHz | 1455 MHz |
| FP16 Compute | N/A | N/A | 18.7 TFLOPs | 18.7 TFLOPs | 21.2 TFLOPs | 28.0 TFLOPs | 30.0 TFLOPs |
| FP32 Compute | 5.04 TFLOPs | 6.8 TFLOPs | 10.0 TFLOPs | 10.0 TFLOPs | 10.6 TFLOPs | 14.0 TFLOPs | 15.0 TFLOPs |
| FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 4.7 TFLOPs | 4.7 TFLOPs | 5.30 TFLOPs | 7.0 TFLOPs | 7.50 TFLOPs |
| Texture Units | 240 | 192 | 224 | 224 | 224 | 320 | 320 |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | 12 GB GDDR5 @ 288 GB/s | 24 GB GDDR5 @ 288 GB/s | 12 GB HBM2 @ 549 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 900 GB/s |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | 4096 KB | 4096 KB | 6144 KB | 6144 KB |
| TDP | 235W | 250W | 250W | 250W | 300W | 250W | 300W |
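The peak FP32 figures in the table follow from the standard formula of CUDA cores × 2 operations per clock (fused multiply-add) × boost clock. A quick sketch checking two of the table's entries:

```python
# Peak FP32 = CUDA cores x 2 ops/clock (FMA) x boost clock.
def peak_fp32_tflops(cuda_cores, boost_mhz):
    """Theoretical peak single-precision throughput in TFLOPs."""
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

# Tesla V100 SXM2: 5120 cores @ 1455 MHz -> ~14.9 TFLOPs (table: 15.0)
print(round(peak_fp32_tflops(5120, 1455), 1))
# Tesla P100 SXM2: 3584 cores @ 1480 MHz -> ~10.6 TFLOPs (table: 10.6)
print(round(peak_fp32_tflops(3584, 1480), 1))
```

FP64 rates are one half of FP32 on GP100/GV100 (32 of the 64 cores per SM are double-precision capable), which is why the FP64 row is consistently half the FP32 row for those parts.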