NVIDIA 16nm Pascal Based Tesla P100 With GP100 GPU Unveiled – World's First GPU With HBM2 and 10.6 TFLOPs of Compute On A Single Chip

Apr 5, 2016

NVIDIA has officially unveiled the Pascal based Tesla P100, its fastest GPU to date. The Pascal GP100 chip is NVIDIA's first GPU built on the 16nm FinFET process node, which delivers 65 percent higher speed, around twice the transistor density, and 70 percent less power than TSMC's 28HPM technology. The new FinFET process allows Pascal to deliver up to twice the performance per watt of the Maxwell GPUs.

NVIDIA Pascal Tesla P100 Unveiled – 15.3 Billion Transistors on a 610mm2 16nm Die – 16 GB HBM2 Memory With Insane Compute

The NVIDIA Pascal Tesla P100 GPU revives double precision compute on NVIDIA chips, a capability that was absent from the Maxwell generation of cards. Maxwell put NVIDIA in its most competitive position yet, with a lineup of graphics cards that won not only in performance per watt but also in performance per dollar. NVIDIA has built a large ecosystem around its Maxwell cards, now represented by the GeForce brand.


With Pascal, NVIDIA is aiming not only at the GeForce brand but also at the high-performance Tesla market, where its biggest chips are targeted. NVIDIA has seen huge demand for next-generation chips in this market and has prepped a range of next-gen chips specifically for HPC.

The GP100 GPU used in Tesla P100 incorporates multiple revolutionary new features and unprecedented performance. Key features of Tesla P100 include:

  • Extreme performance—powering HPC, deep learning, and many more GPU Computing areas;
  • NVLink—NVIDIA’s new high speed, high bandwidth interconnect for maximum application scalability;
  • HBM2—Fastest, high capacity, extremely efficient stacked GPU memory architecture;
  • Unified Memory and Compute Preemption—significantly improved programming model;
  • 16nm FinFET—enables more features, higher performance, and improved power efficiency.

The current 28nm products have existed in the Tesla market since early 2012, when NVIDIA started shipping GK110 GPUs to build the Titan supercomputer. The Tesla K20X powered the fastest supercomputer in the world at that time. When Maxwell came to market, NVIDIA still sold Kepler parts in bulk for their high double precision compute, something the Tesla Maxwell cards lacked. While NVIDIA did later launch Maxwell based Tesla cards aimed at the cloud / virtualization sectors, the flagship FP64-crunching Tesla cards are arriving again with the new Tesla Pascal graphics cards.

Pascal GPU Roadmap Slides From GTC 2015 Showcasing The Architecture Updates on The Latest GPU.

The new Pascal GP100 GPU, aimed at the Tesla market first, features three key technologies: NVLink, FP16 compute, and HBM2. These go along well with the architectural improvements in NVIDIA's latest CUDA architecture.

NVIDIA Pascal GP100 With 10.6 TFLOPs Single and 5.3 TFLOPs Double Precision Compute On A Single Graphics Card

NVIDIA Pascal GP100 GPU Architecture – The Building Blocks of NVIDIA’s HPC Accelerator Chip – 3840 CUDA Cores, Preemption and Return of Double Precision With a Bang

Like previous Tesla GPUs, GP100 is composed of an array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. GP100 achieves its colossal throughput by providing six GPCs, up to 60 SMs, and eight 512-bit memory controllers (4096 bits total). The Pascal architecture’s computational prowess is more than just brute force: it increases performance not only by adding more SMs than previous GPUs, but by making each SM more efficient. Each SM has 64 CUDA cores and four texture units, for a total of 3840 CUDA cores and 240 texture units.

Pascal GP100 Has Insane Clock Speeds – Near 1.5 GHz Boost Clocks

The Pascal GP100 comes with insane clock speeds of 1328 MHz core and 1480 MHz boost, a huge leap over previous Tesla parts. It also shows how clock speed should scale even higher on the smaller chips, so we can expect to see 1500 MHz+ Pascal GPUs in the consumer market.

GP100’s SM incorporates 64 single-precision (FP32) CUDA Cores. In contrast, the Maxwell and Kepler SMs had 128 and 192 FP32 CUDA Cores, respectively. The GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA Cores, an instruction buffer, a warp scheduler, and two dispatch units. While a GP100 SM has half the total number of CUDA Cores of a Maxwell SM, it maintains the same register file size and supports similar occupancy of warps and thread blocks.
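As a sanity check on those numbers, peak single precision throughput follows directly from core count and clock: cores × 2 floating point operations per clock (one fused multiply-add) × boost clock. A quick sketch in Python, using the figures quoted in this article:

```python
# Peak throughput = CUDA cores x ops per clock x clock speed (GHz).
# A fused multiply-add (FMA) counts as 2 floating point operations.
def peak_tflops(cuda_cores: int, boost_ghz: float, ops_per_clock: int = 2) -> float:
    """Peak compute in TFLOPs at the given boost clock."""
    return cuda_cores * ops_per_clock * boost_ghz / 1000.0

# Tesla P100: 3584 enabled FP32 cores at a 1480 MHz boost clock.
print(round(peak_tflops(3584, 1.480), 1))  # -> 10.6, matching the headline figure
```

The same arithmetic applied to the full 3840-core GP100 chip lands near the ~12 TFLOPs figure discussed later in this article.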


NVIDIA Tesla Graphics Card Specs:

| NVIDIA Tesla Graphics Card | Tesla K40 | Tesla M40 | Tesla P100 (12 GB) | Tesla P100 (16 GB) | Tesla P100 (Mezzanine) | Tesla V100 (Mezzanine) |
|---|---|---|---|---|---|---|
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) | GP100 (Pascal) | GV100 (Volta) |
| Process Node | 28nm | 28nm | 16nm | 16nm | 16nm | 12nm |
| Transistors | 7.1 Billion | 8 Billion | 15.3 Billion | 15.3 Billion | 15.3 Billion | 21.1 Billion |
| GPU Die Size | 551 mm2 | 601 mm2 | 610 mm2 | 610 mm2 | 610 mm2 | 815 mm2 |
| CUDA Cores Per SM | 192 | 128 | 64 | 64 | 64 | 64 |
| CUDA Cores (Total) | 2880 | 3072 | 3584 | 3584 | 3584 | 5120 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1792 | 1792 | 2560 |
| Base Clock | 745 MHz | 948 MHz | TBD | TBD | 1328 MHz | 1370 MHz |
| Boost Clock | 875 MHz | 1114 MHz | 1300 MHz | 1300 MHz | 1480 MHz | 1455 MHz |
| FP32 Compute | 5.04 TFLOPs | 6.8 TFLOPs | ~10.0 TFLOPs | ~10.0 TFLOPs | 10.6 TFLOPs | 15.0 TFLOPs |
| FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 4.7 TFLOPs | 4.7 TFLOPs | 5.30 TFLOPs | 7.50 TFLOPs |
| Texture Units | 240 | 192 | 224 | 224 | 224 | 320 |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | 12 GB GDDR5 | 24 GB GDDR5 | 12 GB HBM2 | 16 GB HBM2 | 16 GB HBM2 | 16 GB HBM2 |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | 4096 KB | 4096 KB | 6144 KB |

GP100’s SM has the same number of registers as Maxwell GM200 and Kepler GK110 SMs, but the entire GP100 GPU has far more SMs, and thus many more registers overall. This means threads across the GPU have access to more registers, and GP100 supports more threads, warps, and thread blocks in flight compared to prior GPU generations.
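To put the register argument in rough numbers: each SM on GK110, GM200, and GP100 carries the same 256 KB register file (65,536 32-bit registers), so the chip-wide register count scales directly with SM count. A back-of-envelope sketch, using the SM counts from the tables in this article:

```python
# Per-SM register file is the same size on all three chips:
# 65,536 x 32-bit registers = 256 KB per SM.
REGS_PER_SM = 65536

# SM counts: GK110 (Kepler), GM200 (Maxwell), full GP100 (Pascal).
for chip, sm_count in [("GK110", 15), ("GM200", 24), ("GP100", 60)]:
    total_regs = sm_count * REGS_PER_SM
    print(f"{chip}: {total_regs:,} registers ({total_regs * 4 // 1024} KB total)")
```

Full GP100 ends up with roughly 2.5x the aggregate register capacity of GM200, which is where the extra in-flight threads and warps come from.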

Overall shared memory across the GP100 GPU is also increased due to the higher SM count, and aggregate shared memory bandwidth is effectively more than doubled. A higher ratio of shared memory, registers, and warps per SM in GP100 allows the SM to execute code more efficiently. There are more warps for the instruction scheduler to choose from, more loads to initiate, and more per-thread bandwidth to shared memory.

On the compute side, Pascal takes the next incremental step with double precision performance rated at 5.3 TFLOPs, more than double what the last generation of FP64-enabled GPUs offered. As for single precision, the Pascal GPUs break past the 10 TFLOPs barrier with ease. The chip carries 4 MB of L2 cache. The GPU is in volume production and will arrive in HPC markets very soon. For the mixed precision market, the Tesla P100 can achieve a maximum of 21 TFLOPs of FP16 compute, processing half-precision workloads at twice the rate of FP32.

Because of the importance of high-precision computation for technical computing and HPC codes, a key design goal for Tesla P100 is high double-precision performance. Each GP100 SM has 32 FP64 units, providing a 2:1 ratio of single- to double-precision throughput. Compared to the 3:1 ratio in Kepler GK110 GPUs, this allows Tesla P100 to process FP64 workloads more efficiently.
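These throughput ratios can be checked with simple arithmetic, starting from the 10.6 TFLOPs FP32 figure quoted above:

```python
# GP100's SM pairs 64 FP32 cores with 32 FP64 units (a 2:1 FP32:FP64 ratio),
# and packed FP16 runs at twice the FP32 rate.
fp32_tflops = 10.6              # Tesla P100 peak single precision

fp64_tflops = fp32_tflops / 2   # 2:1 ratio -> double precision
fp16_tflops = fp32_tflops * 2   # 2x rate  -> half precision

print(fp64_tflops, fp16_tflops)  # -> 5.3 21.2 (the article rounds the latter to 21)
```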

NVIDIA Pascal is Built on TSMC’s 16nm FinFET Process Node

The chip is built on the 16nm FinFET process, which brings efficiency improvements and better performance per watt, and with Pascal, double precision compute returns with a bang. Maxwell, NVIDIA's current generation architecture, made serious gains in performance per watt, and Pascal is expected to carry that tradition forward.

TSMC’s 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology. Compared with 20SoC technology, 16FF+ provides 40% higher speed and 60% power saving. By leveraging the experience of 20SoC technology, TSMC 16FF+ shares the same metal backend process in order to quickly improve yield and demonstrate process maturity for time-to-market value. – via TSMC

| GPU Architecture | NVIDIA Fermi | NVIDIA Kepler | NVIDIA Maxwell | NVIDIA Pascal |
|---|---|---|---|---|
| GPU Process | 40nm | 28nm | 28nm | 16nm (TSMC FinFET) |
| Flagship Chip | GF110 | GK210 | GM200 | GP100 |
| GPU Design | SM (Streaming Multiprocessor) | SMX (Streaming Multiprocessor) | SMM (Streaming Multiprocessor Maxwell) | SMP (Streaming Multiprocessor Pascal) |
| Maximum Transistors | 3.00 Billion | 7.08 Billion | 8.00 Billion | 15.3 Billion |
| Maximum Die Size | 520mm2 | 561mm2 | 601mm2 | 610mm2 |
| Stream Processors Per Compute Unit | 32 SPs | 192 SPs | 128 SPs | 64 SPs |
| Maximum CUDA Cores | 512 CCs (16 CUs) | 2880 CCs (15 CUs) | 3072 CCs (24 CUs) | 3840 CCs (60 CUs) |
| FP32 Compute | 1.33 TFLOPs (Tesla) | 5.10 TFLOPs (Tesla) | 6.10 TFLOPs (Tesla) | ~12 TFLOPs (Tesla) |
| FP64 Compute | 0.66 TFLOPs (Tesla) | 1.43 TFLOPs (Tesla) | 0.20 TFLOPs (Tesla) | ~6 TFLOPs (Tesla) |
| Maximum VRAM | 1.5 GB GDDR5 | 6 GB GDDR5 | 12 GB GDDR5 | 16 / 32 GB HBM2 |
| Maximum Bandwidth | 192 GB/s | 336 GB/s | 336 GB/s | 720 GB/s - 1 TB/s |
| Maximum TDP | 244W | 250W | 250W | 300W |
| Launch Year | 2010 (GTX 580) | 2014 (GTX Titan Black) | 2015 (GTX Titan X) | 2016 |

NVIDIA Pascal GP100 Is The First Single Chip GPU With HBM2 To Achieve 1 TB/s Bandwidth

Under the Tesla brand, NVIDIA will be introducing a range of HPC cards based on the GP100 GPU core, which utilizes the Pascal architecture and delivers a behemoth 5.3 TFLOPs of double precision compute along with 16 GB of HBM2 VRAM clocked at 2 Gbps per pin to deliver 1 TB/s of bandwidth. That makes Pascal GP100 the first single GPU to achieve 1 TB/s of memory bandwidth, an insane feat in itself. GP100 is also the first graphics chip in the world to feature the next-gen memory standard, HBM2, sourced from Samsung.

HBM2 VRAM has a lot of advantages in the graphics sector. Not only is it faster, it also scales down and up across several different SKUs. HBM2 uses a much wider bus than GDDR5 memory, delivers up to 1 TB/s of bandwidth and, last but not least, allows HPC class graphics cards to feature up to 16 GB of VRAM.
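The bandwidth figures above fall straight out of bus width and per-pin data rate: bandwidth = bus width × pin rate ÷ 8 bits per byte. A quick check:

```python
# Peak memory bandwidth in GB/s from bus width (bits) and per-pin rate (Gbps).
def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits * pin_rate_gbps / 8

# GP100's 4096-bit HBM2 interface at the 2 Gbps pin rate cited above:
print(bandwidth_gb_s(4096, 2.0))  # -> 1024.0 GB/s, i.e. the 1 TB/s figure
# Tesla K40's 384-bit GDDR5 at 6 Gbps, for comparison:
print(bandwidth_gb_s(384, 6.0))   # -> 288.0 GB/s
```

The same formula explains why HBM2 wins despite its low clock: the 4096-bit bus is more than ten times wider than GDDR5's 384 bits.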

The next generation of NVIDIA Tesla GPUs, which will be shipping to HPC users this year, already come equipped with HBM2 VRAM. NVIDIA is the first graphics card company to feature HBM2 on its GPUs, with the competition a whole year away from launching HBM2 powered chips.

NVIDIA GP100 is a 12 TFLOPs GPU, Full Fat SKU Yet To Arrive With 32 GB HBM2

One of the surprising things about today's announcement is that the Tesla P100 isn't based on the full fat GP100 GPU but on a cut down version with 3584 CUDA cores. The full chip is a behemoth, featuring up to 3840 CUDA cores and 32 GB of HBM2 memory. It's possible that we will see a standard graphics board design later in the roadmap that achieves the full 12 TFLOPs of processing power from the GP100 graphics processing unit.

| Flagship GPU | Vega 10 | Navi 10? | NVIDIA GP100 | NVIDIA GV100 |
|---|---|---|---|---|
| GPU Process | 14nm FinFET | 7nm FinFET? | TSMC 16nm FinFET | TSMC 12nm FinFET |
| GPU Transistors | 15-18 Billion | TBC | 15.3 Billion | 21.1 Billion |
| GPU Cores (Max) | 4096 SPs | TBC | 3840 CUDA Cores | 5376 CUDA Cores |
| Peak FP32 Compute | 12.5 TFLOPs | TBC | 12.0 TFLOPs | 15.0 TFLOPs |
| Peak FP16 Compute | 25.0 TFLOPs | TBC | 24.0 TFLOPs | 120 Tensor TFLOPs |
| Memory (Consumer Cards) | HBM2 | HBM3 | GDDR5X | GDDR6 |
| Memory (Dual-Chip Professional / HPC) | HBM2 | HBM3 | HBM2 | HBM2 |
| HBM2 Bandwidth | 480 GB/s (Instinct MI25) | >1 TB/s? | 732 GB/s (Peak) | 900 GB/s |
| Graphics Architecture | Next Compute Unit (Vega) | Next Compute Unit (Navi) | 5th Gen Pascal CUDA | 6th Gen Volta CUDA |
| Successor of (GPU) | Radeon RX 500 Series? | Radeon RX 600 Series? | GM200 (Maxwell) | GP100 (Pascal) |

NVIDIA’s NVLINK Is a Fast GPU Interconnect Fabric With Speeds of 160 GB/s – Backbone of NVIDIA Powered Supercomputers

The Pascal GP100 GPU is a server and workstation class chip, and since it is aimed at the HPC market first, it also introduces NVLink, the next generation unified virtual memory link with Gen 2.0 cache coherency features and 5 to 12 times the bandwidth of a regular PCIe connection. This will solve many of the bandwidth issues that high performance GPUs currently face.

NVLink will allow several GPUs to be connected in parallel in HPC focused platforms featuring several nodes fitted with Pascal GPUs for compute oriented workloads. The NVLink interconnect gives the multiple processors inside an HPC block a faster path than traditional PCI-e Gen3 lanes, at speeds of up to 160 GB/s. Pascal GPUs also feature unified memory support, allowing the CPU and GPU to share the same memory pool, along with mixed precision support. NVLink will be featured in systems using ARM64 chips and some x86 powered HPC servers that utilize OpenPower, Tyan and Quantum solutions.
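For a rough sense of scale (my own back-of-envelope comparison, not a figure from NVIDIA): a PCIe Gen3 x16 link tops out around 15.75 GB/s per direction, so the 160 GB/s aggregate NVLink figure is roughly an order of magnitude more, consistent with the 5 to 12 times claim:

```python
# PCIe Gen3: 8 GT/s per lane with 128b/130b encoding, 16 lanes, 8 bits per byte.
pcie_gen3_x16 = 16 * 8 * (128 / 130) / 8   # ~15.75 GB/s per direction
nvlink_aggregate = 160.0                    # GB/s, the figure quoted above

print(round(nvlink_aggregate / pcie_gen3_x16, 1))  # roughly 10x a PCIe Gen3 x16 link
```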

The Pascal based Tesla GPU is the next incremental step in HPC acceleration. This is NVIDIA's fastest graphics card to date for the professional market, and we can't wait for NVIDIA to release a consumer version of the GPU later this year. As stated before, the Pascal GPUs will ship to cloud services first in 2016, followed by OEMs in Q1 2017.

NVIDIA Tesla Graphics Cards Comparison:

| Tesla Graphics Card Name | NVIDIA Tesla M2090 | NVIDIA Tesla K40 | NVIDIA Tesla K80 | NVIDIA Tesla P100 | NVIDIA Tesla V100 |
|---|---|---|---|---|---|
| GPU Process | 40nm | 28nm | 28nm | 16nm | 12nm |
| GPU Name | GF110 | GK110 | GK210 x 2 | GP100 | GV100 |
| Die Size | 520mm2 | 561mm2 | 561mm2 | 610mm2 | 815mm2 |
| Transistor Count | 3.00 Billion | 7.08 Billion | 7.08 Billion | 15.3 Billion | 21.1 Billion |
| CUDA Cores | 512 CCs (16 CUs) | 2880 CCs (15 CUs) | 2496 CCs (13 CUs) x 2 | 3840 CCs | 5120 CCs |
| Core Clock | Up To 650 MHz | Up To 875 MHz | Up To 875 MHz | Up To 1480 MHz | Up To 1455 MHz |
| FP32 Compute | 1.33 TFLOPs | 4.29 TFLOPs | 8.74 TFLOPs | 10.6 TFLOPs | 15.0 TFLOPs |
| FP64 Compute | 0.66 TFLOPs | 1.43 TFLOPs | 2.91 TFLOPs | 5.30 TFLOPs | 7.50 TFLOPs |
| VRAM Size | 6 GB | 12 GB | 12 GB x 2 | 16 GB | 16 GB |
| VRAM Bus | 384-bit | 384-bit | 384-bit x 2 | 4096-bit | 4096-bit |
| VRAM Speed | 3.7 GHz | 6 GHz | 5 GHz | 737 MHz | 878 MHz |
| Memory Bandwidth | 177.6 GB/s | 288 GB/s | 240 GB/s | 720 GB/s | 900 GB/s |
| Maximum TDP | 250W | 300W | 235W | 300W | 300W |