NVIDIA Volta Tesla V100 GPU Accelerator Compute Performance Revealed – Features A Monumental Increase Over Pascal Based Tesla P100

NVIDIA's flagship and the fastest graphics accelerator in the world, the Volta based Tesla V100, is now shipping to customers around the globe. The new GPU is a marvel of engineering with a broad range of technologies, including the latest 12nm process, NVLink 2.0, HBM2 memory, Tensor Cores and a highly efficient architecture design, that make it the most suitable chip for heavy compute or AI (deep learning) workloads.

NVIDIA Volta GV100 GPU Based Tesla V100 Benchmarked - A Monumental Performance Increase in Geekbench Compute Test Over The Pascal GP100 Based Tesla P100

Released just a year after the Pascal based Tesla P100, the Volta based Tesla V100 bests its predecessor in every possible way. And just like its predecessor, the flagship is aimed at the deep learning and compute markets. At GTC 2017, we got to learn almost everything about the Volta GV100 GPU, but now we have the first independent test results and they are a shocker.


The system tested in Geekbench 4 was an NVIDIA DGX-1, which NVIDIA calls a supercomputer inside a box. It's a powerful machine that delivers some astonishing performance results. As per official claims, the total horsepower of the DGX-1 has been boosted from 170 TFLOPs of FP16 compute to 960 TFLOPs of mixed-precision compute, a direct effect of the new Tensor Cores featured inside the Volta GV100 GPU.
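The jump from 170 to 960 TFLOPs is simple per-GPU arithmetic, sketched below. The 21.2 TFLOPs FP16 figure for the Tesla P100 (SXM2) and the 120 Tensor TFLOPs launch figure for the GV100 are NVIDIA's published peak numbers, not measured results:

```python
# Sketch of the arithmetic behind NVIDIA's DGX-1 throughput claims.
# Per-GPU figures are NVIDIA's published peak numbers (assumptions here),
# not benchmark measurements.
GPUS_PER_DGX1 = 8

pascal_fp16_per_gpu = 21.2    # TFLOPs, Tesla P100 (SXM2), half precision
volta_tensor_per_gpu = 120.0  # TFLOPs, GV100 Tensor Core FP16, launch figure

pascal_total = GPUS_PER_DGX1 * pascal_fp16_per_gpu    # ~170 TFLOPs
volta_total = GPUS_PER_DGX1 * volta_tensor_per_gpu    # 960 TFLOPs

print(f"Pascal DGX-1: {pascal_total:.0f} TFLOPs FP16")
print(f"Volta DGX-1:  {volta_total:.0f} TFLOPs Tensor FP16")
```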

In terms of specifications, this machine rocks eight Tesla V100 GPUs with 5120 CUDA cores each, for a total of 40,960 CUDA Cores and 5120 Tensor Cores. The DGX-1 houses a total of 128 GB of HBM2 memory across its eight Tesla V100 GPUs. The system features dual Intel Xeon E5-2698 v4 processors, each with 20 cores and 40 threads, clocked at 2.2 GHz, alongside 512 GB of DDR4 memory. Storage comes in the form of four 1.92 TB SSDs configured in RAID 0, while networking is dual 10 GbE with up to four EDR InfiniBand ports. The system comes with a 3.2 kW PSU.

Now comes the part where we unveil the results. The NVIDIA DGX-1 currently holds the fastest compute performance on the Geekbench 4 database; there's no setup in sight that can dethrone this beast. For comparison, an HP Z8 G4 Workstation, which offers a total of nine PCIe slots, scores 278,706 points in the OpenCL API with the Quadro GP100, essentially a Tesla P100 spec'd card. Moving over to the fastest Tesla P100 listing, we see eight PCIe cards configured to reach a score of 320,031 in the CUDA API. But let's take a look at the mind-boggling Tesla V100 scores: a DGX-1 system with eight SXM2 Tesla V100 cards scores 418,504 in the OpenCL API and a monumental 743,537 points with the CUDA API.
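The relative gaps implied by the Geekbench 4 scores quoted above can be worked out directly. This is a quick sketch using only the published scores; the labels are informal shorthand for the listings described in the text:

```python
# Speedups implied by the Geekbench 4 compute scores quoted in the article.
scores = {
    "8x Tesla P100 PCIe (CUDA)": 320031,
    "DGX-1 8x V100 (OpenCL)": 418504,
    "DGX-1 8x V100 (CUDA)": 743537,
    "HP Z8 G4 Quadro GP100 (OpenCL)": 278706,
}

# Generational leap: fastest V100 listing vs. fastest P100 listing, both CUDA
gen_leap = scores["DGX-1 8x V100 (CUDA)"] / scores["8x Tesla P100 PCIe (CUDA)"]
print(f"V100 CUDA vs P100 CUDA: {gen_leap:.2f}x")  # ~2.32x

# API gap on the same hardware: CUDA vs. OpenCL on the DGX-1
api_gap = scores["DGX-1 8x V100 (CUDA)"] / scores["DGX-1 8x V100 (OpenCL)"]
print(f"V100 CUDA vs V100 OpenCL: {api_gap:.2f}x")  # ~1.78x
```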


The score puts the Tesla V100 in an impressive lead over its predecessor, which is something we are excited to see. It also shows that we could be looking at a generational leap in the gaming GPU segment if the performance numbers from the chip architecture carry over to the mainstream markets. Another thing worth pointing out is the incredible tuning of compute output with the CUDA API and related libraries. Not only does the Tesla V100 score far higher under CUDA than under OpenCL, the same can be seen for the Tesla P100, which means NVIDIA is doing some serious work on its cuDNN library, and it's expected to get even better in the coming generations. So there you have it: NVIDIA's fastest GPU showing off some killer performance in its specified compute related workloads.

NVIDIA Tesla Graphics Card Specs:

| NVIDIA Tesla Graphics Card | Tesla K40 | Tesla M40 | Tesla P100 (PCIe) | Tesla P100 (SXM2) | Tesla V100 (PCIe) | Tesla V100 (SXM2) | Tesla V100S (PCIe) |
|---|---|---|---|---|---|---|---|
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) | GV100 (Volta) | GV100 (Volta) | GV100 (Volta) |
| Process Node | 28nm | 28nm | 16nm | 16nm | 12nm | 12nm | 12nm |
| Transistors | 7.1 Billion | 8 Billion | 15.3 Billion | 15.3 Billion | 21.1 Billion | 21.1 Billion | 21.1 Billion |
| GPU Die Size | 551 mm² | 601 mm² | 610 mm² | 610 mm² | 815 mm² | 815 mm² | 815 mm² |
| CUDA Cores Per SM | 192 | 128 | 64 | 64 | 64 | 64 | 64 |
| CUDA Cores (Total) | 2880 | 3072 | 3584 | 3584 | 5120 | 5120 | 5120 |
| Texture Units | 240 | 192 | 224 | 224 | 320 | 320 | 320 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 | 32 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1792 | 2560 | 2560 | 2560 |
| Base Clock | 745 MHz | 948 MHz | 1190 MHz | 1328 MHz | 1230 MHz | 1297 MHz | TBD |
| Boost Clock | 875 MHz | 1114 MHz | 1329 MHz | 1480 MHz | 1380 MHz | 1530 MHz | 1601 MHz |
| FP16 Compute | N/A | N/A | 18.7 TFLOPs | 21.2 TFLOPs | 28.0 TFLOPs | 30.4 TFLOPs | 32.8 TFLOPs |
| FP32 Compute | 5.04 TFLOPs | 6.8 TFLOPs | 10.0 TFLOPs | 10.6 TFLOPs | 14.0 TFLOPs | 15.7 TFLOPs | 16.4 TFLOPs |
| FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 4.7 TFLOPs | 5.30 TFLOPs | 7.0 TFLOPs | 7.80 TFLOPs | 8.2 TFLOPs |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | 12 GB GDDR5 @ 288 GB/s | 24 GB GDDR5 @ 288 GB/s | 16 GB HBM2 @ 732 GB/s / 12 GB HBM2 @ 549 GB/s | 16 GB HBM2 @ 732 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 900 GB/s | 16 GB HBM2 @ 1134 GB/s |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | 4096 KB | 6144 KB | 6144 KB | 6144 KB |
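The peak compute figures in the table follow from the standard formula: each CUDA core executes one fused multiply-add (2 FLOPs) per clock, so peak throughput is cores × 2 × boost clock. A quick sketch checking the Tesla V100 (SXM2) row:

```python
# Peak throughput = cores * 2 FLOPs (one FMA) * clock. Figures below are the
# table's published specs for the Tesla V100 (SXM2).
def peak_tflops(cores: int, boost_mhz: float) -> float:
    """Peak TFLOPs for `cores` shader cores at `boost_mhz` boost clock."""
    return cores * 2 * boost_mhz / 1e6

# 5120 FP32 CUDA cores at 1530 MHz boost
print(round(peak_tflops(5120, 1530), 1))  # 15.7 (TFLOPs FP32, matches table)
# FP64 runs at half rate on GV100: 2560 FP64 cores
print(round(peak_tflops(2560, 1530), 1))  # 7.8 (TFLOPs FP64, matches table)
```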