NVIDIA Shipping Working GP100-Based Tesla P100 Boards in June – HPC / Supercomputing First, OEM Availability in Q1 2017

Hassan Mujtaba
Posted Apr 6, 2016

NVIDIA's server partners have revealed the first actual Tesla P100 graphics cards that will be shipping this year. The Tesla P100 GPU was announced yesterday by Jen-Hsun Huang (CEO of NVIDIA) at the opening keynote of GTC 2016. The Tesla P100 is already in volume production and will ship first to deep learning / cloud computing servers and supercomputers in the US in June 2016, with OEM availability following in Q1 2017.

The Tesla P100 is the world's fastest hyperscale datacenter and HPC graphics card.

NVIDIA Already Has Working Tesla P100 Boards – 16nm Pascal in Full Volume Production

NVIDIA is clearly prepping the GP100 for its priority customers at the moment, which are the cloud and HPC markets. Back in 2012, NVIDIA started shipping its high-end Kepler GK110 chips to supercomputers before they were available to either OEMs or consumers. Supercomputers and cloud servers are NVIDIA's first priority, which is further proven by the fact that the NVIDIA DGX-1 supercomputing platform will be shipping in June 2016, while other server partners will be able to sell it through their channels in Q1 2017 – the same date when NVIDIA plans to ship GP100 to OEMs.

There's no doubt that NVIDIA considers the HPC market a priority for its latest GP100 GPU. NVIDIA's CEO spent around two hours yesterday talking about HPC, deep learning and AI acceleration. The announcements were impressive from a server computing perspective, but that doesn't mean NVIDIA is leaving the consumer market behind as a whole.

The NVIDIA DGX-1 is a complete supercomputing solution that houses several GP100 GPUs!

The majority of NVIDIA's revenue still comes from gaming; it is the market NVIDIA is best known for and caters to the most. The gaming community should not be disappointed by the event, as reports suggest we could see consumer variants based on Pascal GPUs as early as Computex 2016.

Coming back to the topic, Computerbase managed to find the first batch of Tesla P100 GPUs, which are shipping in the QuantaPlex T21W-3U rack from Quanta. We talked about this a day earlier, and the specs of this unit are very high-end. We are looking at the latest Xeon E5-2600 V4 processors with up to a terabyte of DRAM clocked at 2666 MHz, a hybrid storage solution that supports up to 12 SATA/SFF and 8 U.2 (x4) drives, and a fully equipped NVLink interconnect that uses mezzanine slots to support up to eight Tesla P100 GPUs. Cooling is handled by an air-cooled system.


NVIDIA’s GP100 GPU Is Real – Here’s What it Looks Like:

Image Credits/Source: Computerbase

You can note in the pictures above that the Tesla P100 boards shown in the units come in two models: a week 41 unit with HBM1 and a week 43 unit with HBM2. This indicates that NVIDIA had figured out the interposer design for its Pascal GPUs back in 2015. NVIDIA showed the same designs at GTC 2015 in Japan and waited for the supply of HBM2 memory, which would later be equipped on its GP100 chip.

Although these units were shown not by NVIDIA themselves but by Quanta QCT, they prove that NVIDIA has been sampling GP100 GPUs with both HBM1 (2015) and HBM2 (2016) memory for quite some time. Quanta also showcased a live demo unit of its rack at GTC 2016 with working NVLink. With Samsung already mass producing HBM2 memory, the Tesla P100 is well on its way to being housed in the next HPC platforms starting June 2016.

NVIDIA Pascal Tesla P100 To Power The Piz Daint Supercomputer in 2016 – 7.8 PFLOPs Compute Performance

The NVIDIA Tesla P100 will be used to upgrade the Piz Daint supercomputer at CSCS (the Swiss National Supercomputing Centre). NVIDIA will be shipping over 4,500 Pascal-based GPUs later this year, which will be used to deliver up to 7.8 PetaFLOPs of compute performance, or 7.8 quadrillion mathematical calculations per second.

The Five Miracles That Ushered In The Development of the Pascal GP100 GPU.

Pascal is the most advanced GPU architecture ever built, delivering unmatched performance and efficiency to power the most computationally demanding applications. Pascal-based Tesla GPUs will allow researchers to solve larger, more complex problems that are currently out of reach in cosmology, materials science, seismology, climatology and a host of other fields. via NVIDIA

“CSCS scientists are using Piz Daint to tackle some of the most important computational challenges of our day, like modeling the human brain and uncovering new insights into the origins of the universe,” said Ian Buck, vice president of Accelerated Computing at NVIDIA. “Tesla GPUs deliver a massive leap in application performance, allowing CSCS to push the limits of scientific discovery.”

Tesla P100 With GP100 – The Fastest Supercomputing Chip in The World

NVIDIA's Tesla P100 is the fastest supercomputing chip in the world. It is based on an entirely new, fifth-generation CUDA architecture codenamed Pascal. The GP100 GPU, which utilizes the Pascal architecture, is at the heart of the Tesla P100 accelerator. NVIDIA has spent the last several years developing the new GPU, and it will finally ship to supercomputers in June 2016.


The Tesla P100 comes with beefy specs. Starting off, we have a 16nm Pascal chip that measures in at 610mm², features 15.3 billion transistors and comes with 3584 CUDA cores. The full Pascal GP100 chip features up to 3840 CUDA cores. NVIDIA has redesigned its SM (streaming multiprocessor) units and rearranged them to 64 CUDA cores per SM block. The Tesla P100 has 56 of these blocks enabled, while the full GP100 has 60 blocks in total. The chip also comes with a dedicated set of FP64 CUDA cores: there are 32 FP64 cores per block, giving the whole GPU 1792 dedicated FP64 cores.
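As a quick sanity check, the core counts quoted above follow directly from the SM layout – the per-SM counts multiplied by the number of enabled SMs. A minimal sketch (variable names are purely illustrative):

```python
# Sanity check of the GP100 / Tesla P100 core counts quoted above.
CUDA_CORES_PER_SM = 64   # FP32 CUDA cores per SM on GP100
FP64_CORES_PER_SM = 32   # dedicated FP64 cores per SM
FULL_GP100_SMS = 60      # SM blocks on the full GP100 die
TESLA_P100_SMS = 56      # SM blocks enabled on the Tesla P100

print(FULL_GP100_SMS * CUDA_CORES_PER_SM)   # 3840 CUDA cores (full GP100)
print(TESLA_P100_SMS * CUDA_CORES_PER_SM)   # 3584 CUDA cores (Tesla P100)
print(TESLA_P100_SMS * FP64_CORES_PER_SM)   # 1792 FP64 cores  (Tesla P100)
```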

The 16nm FinFET process allows for higher clock rates and throughput. In the case of the Tesla P100, we are looking at a 1328 MHz base and 1480 MHz boost clock, which allows NVIDIA to crunch up to 21.2 TFLOPs FP16, 10.6 TFLOPs FP32 and 5.3 TFLOPs FP64 compute performance. The Tesla P100 features next-generation HBM2 memory with 16 GB of VRAM and comes with a 300W TDP. More details on the architecture can be found here.
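Those throughput figures fall out of the standard peak-FLOPs formula – cores × clock × 2 (one fused multiply-add, i.e. two floating-point operations, per core per cycle) – with FP16 running at twice the FP32 rate on GP100 and FP64 using the dedicated double-precision cores. A minimal sketch at the boost clock:

```python
# Peak-throughput estimate for Tesla P100 at the 1480 MHz boost clock.
# Each core retires one FMA (two floating-point ops) per cycle.
boost_clock_hz = 1480e6
fp32_cores = 3584
fp64_cores = 1792

fp32_tflops = fp32_cores * boost_clock_hz * 2 / 1e12  # ~10.6 TFLOPs
fp16_tflops = fp32_tflops * 2                         # ~21.2 TFLOPs (2x FP32 rate)
fp64_tflops = fp64_cores * boost_clock_hz * 2 / 1e12  # ~5.3 TFLOPs

print(f"FP16 ~{fp16_tflops:.1f} / FP32 ~{fp32_tflops:.1f} / FP64 ~{fp64_tflops:.1f} TFLOPs")
```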

NVIDIA Pascal Tesla P100 Specs:

| NVIDIA Tesla Graphics Card | Tesla K40 (PCI-Express) | Tesla M40 (PCI-Express) | Tesla P100 (PCI-Express, 12 GB) | Tesla P100 (PCI-Express, 16 GB) | Tesla P100 (Mezzanine) |
|---|---|---|---|---|---|
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) | GP100 (Pascal) |
| Process Node | 28nm | 28nm | 16nm | 16nm | 16nm |
| Transistors | 7.1 Billion | 8 Billion | 15.3 Billion | 15.3 Billion | 15.3 Billion |
| GPU Die Size | 551 mm² | 601 mm² | 610 mm² | 610 mm² | 610 mm² |
| SMs | 15 | 24 | 56 | 56 | 56 |
| CUDA Cores Per SM | 192 | 128 | 64 | 64 | 64 |
| CUDA Cores (Total) | 2880 | 3072 | 3584 | 3584 | 3584 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1792 | 1792 |
| Base Clock | 745 MHz | 948 MHz | TBD | TBD | 1328 MHz |
| Boost Clock | 875 MHz | 1114 MHz | 1300 MHz | 1300 MHz | 1480 MHz |
| FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 4.7 TFLOPs | 4.7 TFLOPs | 5.30 TFLOPs |
| Texture Units | 240 | 192 | 224 | 224 | 224 |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | 12 GB GDDR5 | 24 GB GDDR5 | 12 GB HBM2 | 16 GB HBM2 | 16 GB HBM2 |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | 4096 KB | 4096 KB |
| TDP | 235W | 250W | 250W | 250W | 300W |
