NVIDIA GeForce RTX 2080 3DMark Time Spy Score Leaked – Clocked At 2GHz And Beats A GTX 1080 Ti Without AI Cores
One of the more reliable Twitter leakers, TumApisak, has posted what appears to be the 3DMark Time Spy score of the upcoming NVIDIA GeForce RTX 2080, and it's very impressive: it beats out a GTX 1080 Ti. Please do heed the rumor tag and take this with a grain of salt. We do not know whether the test was done using final drivers (which in Turing's case can make a world of difference), and the run is almost certainly not using the Tensor cores, which constitute almost a third of the die space.
NVIDIA RTX 2080 Time Spy preliminary benchmark leaked: 37% faster than a GTX 1080 and 6% faster than a GTX 1080 Ti without using AI cores (DLSS)
This initial benchmark showcases the raw, conventional increase in shader performance and was probably run using preliminary drivers. Since synthetics cannot currently take advantage of DLSS, it is worth noting that a very large part of the die (namely the Tensor cores) is not being used during this run. This means that what you are looking at is probably the lower bound of the performance uplift you can expect from Turing. With that said, and without further ado, here is the benchmark:
The RTX 2080 manages to score more than 10,000 points in the 3DMark Time Spy benchmark. To put this into perspective, the GTX 1080 Ti achieves 9,508 points in the same test while the GTX 1080 achieves a mere 7,325 points. This means the RTX 2080 is roughly 37% faster than the GTX 1080 and 6% faster than the GTX 1080 Ti in conventional shading performance. I am fairly certain actual games with optimized drivers will be able to hit the 40% to 50% performance-improvement sweet spot once these cards actually hit the shelves.
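For readers who want to check the math, here is a quick sanity check of those percentages. The Ti and non-Ti scores are from the leak; since the exact RTX 2080 score is only given as "more than 10,000", a flat 10,000 is assumed here, which is why the 1080 Ti gap comes out just shy of the quoted 6% (the real score was presumably slightly higher).

```python
# Leaked Time Spy scores quoted in the article.
# The RTX 2080 figure is an assumption: "more than 10,000" rounded
# down to exactly 10,000 for illustration.
scores = {
    "RTX 2080 (leaked)": 10000,
    "GTX 1080 Ti": 9508,
    "GTX 1080": 7325,
}

rtx = scores["RTX 2080 (leaked)"]
for card in ("GTX 1080", "GTX 1080 Ti"):
    uplift = (rtx / scores[card] - 1) * 100
    print(f"RTX 2080 vs {card}: +{uplift:.1f}%")
```

With these inputs the script prints roughly +36.5% over the GTX 1080 and +5.2% over the GTX 1080 Ti, consistent with the 37% and ~6% figures above once the real score edges past 10,000.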
Also, can we talk about the clocks? This appears to be a standard variant of the RTX 2080, and it's clocked at 2025 MHz. That is much faster than the GTX 1080's reference boost clock, and if accurate, it accounts for a good chunk of the performance gain on its own. The Turing SMs also showcase their improved design here: based on the core count and clock speed increases alone, you should theoretically expect only a 13-17% uplift.
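That theoretical range can be sketched with a back-of-the-envelope calculation. The core counts below are the published specs (2944 CUDA cores for the RTX 2080, 2560 for the GTX 1080); the 2025 MHz figure is from the leak, while the GTX 1080 clock is an illustrative assumption, since well-cooled Pascal cards also boost near 2 GHz in practice.

```python
# Naive shader throughput model: performance scales with
# (core count x clock). This ignores architectural changes,
# which is exactly why the real result beats this estimate.
cores_2080, cores_1080 = 2944, 2560    # published CUDA core counts
clock_2080, clock_1080 = 2025, 2000    # MHz; GTX 1080 clock is assumed

theoretical = (cores_2080 * clock_2080) / (cores_1080 * clock_1080) - 1
print(f"Theoretical shader throughput uplift: {theoretical * 100:.0f}%")
```

This lands at about 16%, inside the 13-17% window; the fact that the leaked run beats a GTX 1080 by ~37% is what points to real per-SM gains in Turing rather than just more cores at higher clocks.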
Considering the price is almost twice that of the GTX 1080, does the performance justify it? Well, that is something only the buyer can decide, but it is worth mentioning at this point that the real selling point of Turing isn't the conventional shader engine; it's the AI cores and the ray tracing cores that have been added this generation. While older GPUs only had a shader core, Turing has three different engines, and in this test only the shader engine is being used.
The AI cores will allow NVIDIA to hit performance levels that were physically impossible with this shader count alone. With DLSS, NVIDIA can use deep learning to accelerate frame rates to ridiculous levels, and this could prove to be the single biggest selling point of the series. Opinion about ray tracing is divided right now, but I personally feel that a 40%-100% performance uplift with AI and RTX technology is a good value proposition for the company.