Nvidia Geforce GTX 1080 Ti Spotted In A Zauba Shipping Manifest – Features 10 GB Worth of vRAM And A Cheaper Price Tag Than A TITAN-X
The good folks over at Videocardz have spotted the Zauba shipping manifest we have all been waiting for: the anxiously anticipated GTX 1080 Ti appears to have finally surfaced in the wild. The manifest follows the same nomenclature we have previously seen on Pascal-based cards shipped for testing, and lists a graphics card with 10 GB of vRAM.
Nvidia’s Pascal ‘GTX 1080 Ti’ flagship inbound – GP102 GPU spotted in the wild with 10 GB vRAM
The story of the GP102-based core started about four months back and was confirmed by a leaked changelog from the AIDA64 devs. Nvidia has already released one variant of the core in the form of the TITAN-X for the ultra-high-end spectrum, but we have yet to see one in the usual mainstream "Ti" format. Dubbed the Geforce GTX 1080 Ti (at least until we are sure of the name), the graphics card will bring the GP102 core into the mainstream with a price tag under $1,000. The shipping manifest's INR value, which is essentially the insured value of the chip, is lower than the TITAN-X's. While the INR value does not translate directly to MSRP, we can safely infer that the card will be priced below the TITAN-X. The exact type of GDDR (GDDR5 or GDDR5X) is not mentioned.
The shipping manifest shows the memory at 10240 MB, a solid 10 GB of vRAM. The memory bus is listed as 384-bit, although it remains to be seen whether the full configuration will be active. As for the core count itself, apart from the fact that this is a GP102 part, we don't know anything concrete. The TITAN-X has 3584 cores while the GTX 1080 has 2560. Either the GP102 part will have a configuration somewhere in between, or it will use the same die configuration as the TITAN-X (unlikely). There is a very slim possibility that Nvidia decides to go ahead with a fully fledged core.
On paper, the GP100 has a total of 60 SMs, or Streaming Multiprocessors. Each SM has a 2:1 ratio of FP32 cores to FP64 cores, which effectively means 3840 CUDA cores on the FP32 side of things and 1920 on the FP64 side. Despite that total of 5760 cores, the GP100 clocks in at just 610mm². Of course, as the more tech-savvy will remember, the P100 accelerator doesn't actually use the full GP100 die; it uses one with 56 SMs, for a total of 3584 FP32 cores and 1792 FP64 cores. The lower SM count can only be attributed to poor yields (expected this early in a node and at this large a die size), which is also one more reason why we should not expect the GP100 to power the GTX 1080 Ti (not a full one, at any rate).
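The SM arithmetic above is easy to sanity-check. A quick sketch, using only the per-SM figures quoted in this article:

```python
# Per-SM figures for GP100, as quoted above.
FP32_PER_SM = 64
FP64_PER_SM = FP32_PER_SM // 2  # 2:1 FP32-to-FP64 ratio -> 32 per SM

def core_counts(sms):
    """Return (FP32, FP64) core totals for a GP100 with `sms` active SMs."""
    return sms * FP32_PER_SM, sms * FP64_PER_SM

print(core_counts(60))  # full 60-SM die -> (3840, 1920)
print(core_counts(56))  # P100 accelerator's 56-SM die -> (3584, 1792)
```

Both results match the figures above: 3840/1920 cores for the full die, 3584/1792 for the cut-down P100 configuration.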
Of course, unlike the P100 accelerator, a gaming graphics card like the GTX 1080 Ti doesn't actually need FP64 cores, so there is no point in wasting valuable die space hosting DP units. With that in mind, we can begin discussing what to expect from the GP102. Keep in mind, however, that accelerators like the P100 do not have ROPs, so we are not looking at a strictly linear trade-off. There is also the fact that the GP102 will almost certainly not be as big as the GP100. Our usual sources have been very tight-lipped about this particular die, but they did state that it would be "exactly half way" between the GP104 and the GP100.
Since the GP100 and GP104 are 610mm² and 314mm² respectively, the GP102 was expected to land in the ballpark of 462mm² to 478mm², which the release of the TITAN-X's GP102 variant confirmed. A chip of that size is theoretically capable of hosting the full complement of the GP100's FP32 cluster, in other words 3840 FP32 CUDA cores. In practice, however, we are looking at anywhere from 3072 to 3584 CUDA cores; whatever the final figure, it will land somewhere in that range (give or take a few SMs). According to our estimates, that core count puts power consumption in the region of 270W with GDDR5X. If Nvidia shifts to HBM2 for the GTX 1080 Ti, consumption should fall within a 250W power budget. According to our sources, HBM2 will only start taping out in the third quarter of this year, so it is unclear whether Nvidia will opt for it.
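The die-size estimate above is simple arithmetic on the "exactly half way" claim; a minimal sketch, using only the die sizes quoted in this article:

```python
# Midpoint estimate for the GP102 die, per our sources' "exactly half way"
# description; all figures are the ones quoted in this article.
GP104_MM2 = 314  # GTX 1080 die
GP100_MM2 = 610  # Tesla P100 die

gp102_estimate = (GP104_MM2 + GP100_MM2) / 2
print(gp102_estimate)  # 462.0 -- the low end of the 462-478 mm^2 ballpark

# The TITAN-X's shipping GP102 measures 471 mm^2, inside that window.
assert 462 <= 471 <= 478
```

The exact midpoint gives the low end of the range; the extra margin up to 478mm² allows for the ROP/FP64 layout differences discussed above.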
Nvidia GTX 1080 Ti 'Expected' Specifications Comparison
| NVIDIA Graphics Card | Tesla P100 | GTX 1080 Ti* | GTX 1080 |
|---|---|---|---|
| Transistors | 15.3 Billion | 10.8 Billion | 7.2 Billion |
| GPU Die Size | 610mm² | 471mm² | 314mm² |
| CUDA Cores Per SM | 64 | 64 | 64 |
| FP32 CUDA Cores (Total) | 3584 | 3072-3584 (TBC) | 2560 |
| FP64 CUDA Cores / SM | 32 | TBD | 2 |
| FP64 CUDA Cores / GPU | 1792 | TBD | 80 |
| Memory Interface | 4096-bit HBM2 | 384-bit GDDR5X | 256-bit GDDR5X |
| Memory Size | 16 / 32 GB HBM2 | 10 GB GDDR5X | 8 GB GDDR5X |