NVIDIA Volta Tesla V100 Cards Detailed – 150W Single-Slot & 300W Dual-Slot GV100 Powered PCIe Accelerators

May 10, 2017

NVIDIA today announced two next-generation accelerator cards based on its Volta graphics architecture and GV100 GPU. The new Tesla V100 accelerators will come in two PCIe form factors: a 150W single-slot, full-height, half-length design and a standard 300W dual-slot design. Both house NVIDIA's next-generation GV100 GPU, featuring 5120 Volta CUDA cores and 16 GB of HBM2 memory.

NVIDIA Tesla V100 Accelerator – 150W Single-Slot and 300W Dual-Slot PCIe Cards

The GV100 Volta GPU that sits at the heart of each of these upcoming Tesla accelerators is a massive 815mm² chip with over 21 billion transistors, built on TSMC's new 12nm FinFET manufacturing process. At a 1455MHz boost clock, the 300W Tesla V100 delivers 15 TFLOPS of single precision compute and 7.5 TFLOPS of double precision compute. It's worth noting that, just like the P100, the V100 does not feature a fully unlocked GPU: the GV100 die houses 5376 CUDA cores, but only 5120 are enabled in the Tesla V100.
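As a back-of-the-envelope check (a sketch, not NVIDIA's published methodology), those peak figures follow from core count times boost clock times two floating-point operations per core per clock (one fused multiply-add):

```python
# Sketch: reproducing NVIDIA's quoted peak-throughput figures from the
# core counts and boost clock above. An FMA counts as 2 FLOPs.

def peak_tflops(cores: int, boost_clock_mhz: float, ops_per_clock: int = 2) -> float:
    """Peak throughput in TFLOP/s: cores x clock x FLOPs per core per clock."""
    return cores * boost_clock_mhz * 1e6 * ops_per_clock / 1e12

fp32 = peak_tflops(5120, 1455)   # ~14.9 TFLOP/s, marketed as 15
fp64 = peak_tflops(2560, 1455)   # ~7.45 TFLOP/s, marketed as 7.5
print(f"FP32: {fp32:.1f} TFLOP/s, FP64: {fp64:.1f} TFLOP/s")
```

The small gap between ~14.9 and the marketed 15 TFLOPS is just rounding in NVIDIA's marketing material.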


NVIDIA Tesla V100 300W PCIe Accelerator

Tesla Product             | Tesla K40            | Tesla M40      | Tesla P100    | Tesla V100
GPU                       | GK110 (Kepler)       | GM200 (Maxwell)| GP100 (Pascal)| GV100 (Volta)
SMs                       | 15                   | 24             | 56            | 80
TPCs                      | 15                   | 24             | 28            | 40
FP32 Cores / SM           | 192                  | 128            | 64            | 64
FP32 Cores / GPU          | 2880                 | 3072           | 3584          | 5120
FP64 Cores / SM           | 64                   | 4              | 32            | 32
FP64 Cores / GPU          | 960                  | 96             | 1792          | 2560
Tensor Cores / SM         | NA                   | NA             | NA            | 8
Tensor Cores / GPU        | NA                   | NA             | NA            | 640
GPU Boost Clock           | 810/875 MHz          | 1114 MHz       | 1480 MHz      | 1455 MHz
Peak FP32 TFLOP/s*        | 5.04                 | 6.8            | 10.6          | 15
Peak FP64 TFLOP/s*        | 1.68                 | 2.1            | 5.3           | 7.5
Peak Tensor Core TFLOP/s* | NA                   | NA             | NA            | 120
Texture Units             | 240                  | 192            | 224           | 320
Memory Interface          | 384-bit GDDR5        | 384-bit GDDR5  | 4096-bit HBM2 | 4096-bit HBM2
Memory Size               | Up to 12 GB          | Up to 24 GB    | 16 GB         | 16 GB
L2 Cache Size             | 1536 KB              | 3072 KB        | 4096 KB       | 6144 KB
Shared Memory Size / SM   | 16 KB/32 KB/48 KB    | 96 KB          | 64 KB         | Configurable up to 96 KB
Register File Size / SM   | 256 KB               | 256 KB         | 256 KB        | 256 KB
Register File Size / GPU  | 3840 KB              | 6144 KB        | 14336 KB      | 20480 KB
TDP                       | 235 Watts            | 250 Watts      | 300 Watts     | 300 Watts
Transistors               | 7.1 billion          | 8 billion      | 15.3 billion  | 21.1 billion
GPU Die Size              | 551 mm²              | 601 mm²        | 610 mm²       | 815 mm²
Manufacturing Process     | 28 nm                | 28 nm          | 16 nm FinFET+ | 12 nm FFN

* Peak TFLOP/s rates are based on GPU Boost clock.

For hyperscale datacenters, NVIDIA has managed to cram that same 815mm² GV100 GPU into a card the size of a CD case. At half the power, the 150W hyperscale Tesla V100 naturally won't be as fast as its 300W bigger brother, but it's close. How close? NVIDIA isn't disclosing that information just yet.

NVIDIA’s Volta Architecture & The GV100 GPU

NVIDIA's new Volta architecture delivers 40% better performance per watt than Pascal, packs 7% more CUDA cores per mm², and achieves 6% better performance per mm². This is thanks to a combination of the more efficient, higher-density 12nm FinFET process and architectural refinements over Pascal.
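The density claim can be sanity-checked against the spec table above; a quick sketch comparing enabled CUDA cores per mm² on the V100 (GV100) versus the P100 (GP100):

```python
# Sketch: CUDA-core density from the die sizes and core counts in the
# spec table. GV100: 5120 cores / 815 mm²; GP100: 3584 cores / 610 mm².

def cores_per_mm2(cores: int, die_mm2: float) -> float:
    return cores / die_mm2

gv100 = cores_per_mm2(5120, 815)  # ~6.28 cores/mm²
gp100 = cores_per_mm2(3584, 610)  # ~5.88 cores/mm²
gain = gv100 / gp100 - 1          # ~6.9%, matching the ~7% figure
print(f"GV100 packs {gain:.0%} more CUDA cores per mm² than GP100")
```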

Each Volta SM (Streaming Multiprocessor) still houses 64 CUDA cores, just like Pascal. However, Volta features a slightly different SM partitioning. While in Pascal each SM was partitioned into two blocks, in Volta each SM is partitioned into four blocks, each with 16 FP32 cores, 8 FP64 cores, 16 INT32 cores and two of NVIDIA's brand new Tensor Cores.
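As a quick arithmetic check of that partitioning (assuming the full GV100 die carries 84 SMs, consistent with the 5376-core figure quoted earlier), the per-block counts add back up to the per-SM and per-GPU totals in the spec table:

```python
# Sketch: per-SM totals implied by Volta's four-block SM partitioning.

BLOCKS_PER_SM = 4
FP32_PER_BLOCK = 16
FP64_PER_BLOCK = 8
TENSOR_PER_BLOCK = 2

fp32_per_sm = BLOCKS_PER_SM * FP32_PER_BLOCK      # 64 FP32 cores, as in Pascal
fp64_per_sm = BLOCKS_PER_SM * FP64_PER_BLOCK      # 32 FP64 cores
tensor_per_sm = BLOCKS_PER_SM * TENSOR_PER_BLOCK  # 8 Tensor Cores

# Full die vs. the partially disabled Tesla V100 part:
print(84 * fp32_per_sm)   # 5376 cores on the full GV100 die
print(80 * fp32_per_sm)   # 5120 cores enabled on Tesla V100
```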


This is another area where GV100 differs from GP100. Each Volta GV100 SM includes separate FP32 and INT32 cores, which can simultaneously execute FP32 and INT32 operations at full throughput, whereas GP100 featured only FP32 cores capable of executing either FP32 or INT32 operations at any given time, but not both at once.

Tensor Cores are mixed-precision FP16/FP32 units that operate on 4×4 matrix arrays, dramatically accelerating what NVIDIA calls tensor operations. According to NVIDIA, this allows Volta to deliver 6x higher inferencing throughput per clock compared to Pascal and 12x the deep learning training throughput per clock.
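The marketed 120 TFLOP/s tensor figure can be reproduced from the spec table, assuming (per NVIDIA's Volta whitepaper) that each Tensor Core completes one 4×4×4 matrix multiply-accumulate per clock, i.e. 64 fused multiply-adds, or 128 floating-point operations. A minimal sketch:

```python
# Sketch: deriving the ~120 TFLOP/s tensor throughput. Each Tensor Core
# computes D = A*B + C on 4x4 matrices every clock: 4^3 = 64 FMAs,
# and each FMA counts as 2 floating-point operations.

MATRIX_DIM = 4
FMAS_PER_CLOCK = MATRIX_DIM ** 3          # 64 multiply-adds per Tensor Core
OPS_PER_CLOCK = FMAS_PER_CLOCK * 2        # 128 FLOPs per Tensor Core per clock
TENSOR_CORES = 640                        # 8 per SM x 80 SMs
BOOST_CLOCK_HZ = 1455e6

tflops = TENSOR_CORES * OPS_PER_CLOCK * BOOST_CLOCK_HZ / 1e12
print(f"{tflops:.1f} TFLOP/s")            # ~119.2, marketed as 120
```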

The key architectural improvements from Pascal to Volta include:

  • New mixed-precision FP16/FP32 Tensor Cores purpose-built for deep learning matrix arithmetic;
  • Enhanced L1 data cache for higher performance and lower latency;
  • Streamlined instruction set for simpler decoding and reduced instruction latencies;
  • Higher clocks and higher power efficiency.
