NVIDIA GeForce RTX 3090 TecLab Review Leaked: 10% Faster Than RTX 3080

TecLab appears to have gotten leaky again and published a detailed review of the RTX 3090 (spotted and compiled by Videocardz). Since the video was taken down the last time they did this, WhyCry took the liberty of compiling all the juicy bits, and according to their review the RTX 3090 is on average just 10% faster than the RTX 3080 (while being twice as expensive). While this is almost certainly down to some sort of bottleneck, these are very interesting (and controversial) results.

TecLab leaks NVIDIA RTX 3090 review: 10% faster than the RTX 3080

TecLab is quickly approaching cult status in the leak scene as the husky-mask-donning vigilantes break the usual embargo to publish juicy videos with details earlier than anyone else. This time around, the subject of the video is the anxiously anticipated RTX 3090. Spoiler alert, however: the RTX 3090 only seems to be capable of delivering 10% more performance than the RTX 3080 with the current state of drivers and game code.

Before we begin, the preliminaries are as follows: an Intel Core i9-10900 clocked at a solid 5 GHz was used with 32 GB of RAM at 4133 MHz. Needless to say, this is an absolutely solid configuration and more than exceeds what is available to gamers in usual circumstances. A Galax HOF PRO M.2 1 TB SSD was used, and a flurry of titles was tested. While the review doesn't explicitly mention the RTX 3080 and RTX 3090 by name, it does refer to the 5000-yuan flagship and the 10000-yuan flagship, so it is very clear which cards are meant.

The following data was compiled by Videocardz. As we can see below, the average performance increase across a panel of 16 synthetics and titles is roughly 10% (the exact figure is 8.8%, which is even less) over the RTX 3080. This isn't in itself particularly surprising: the RTX 3080 already doubles the CUDA core count of the RTX 2000 series, so any software-based bottlenecks that affect it would show up on the RTX 3090 just as easily.

NVIDIA GeForce RTX 3090 vs RTX 3080 (compilation by Videocardz)
Score / 4K AVG FPS                        RTX 3090  RTX 3080  3090/3080
3DMark Time Spy Extreme                       9948      9000     +10.5%
3DMark Port Royal                            12827     11981      +7.1%
Metro Exodus RTX/DLSS OFF                     54.4      48.8     +10.2%
Metro Exodus RTX/DLSS ON                      74.5      67.6     +10.2%
Rainbow Six Siege                              275       260      +5.8%
Horizon Zero Dawn                               84        76     +10.5%
Forza Horizon                                  156       149      +4.7%
Far Cry                                        107        99      +8.1%
Assassin's Creed Odyssey                        71        65      +9.2%
Shadow of the Tomb Raider RTX/DLSS Off          91        83      +9.6%
Shadow of the Tomb Raider RTX/DLSS On          111       102      +8.8%
Borderlands 3                                 67.6      61.3     +10.3%
Death Stranding DLSS/RTX ON                    175       164      +6.7%
Death Stranding DLSS/RTX OFF                   116       104     +11.5%
Control DLSS/RTX ON                             71        65      +9.2%
Control DLSS/RTX OFF                            62        57      +8.8%
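For readers who want to verify the headline figure, a quick back-of-the-envelope script (a minimal sketch; the inputs are simply the per-test deltas from the table above, as compiled by Videocardz) confirms that the 16 results average out to about 8.8%:

```python
# Per-test 3090-over-3080 deltas (in percent) from the compiled table.
deltas = {
    "3DMark Time Spy Extreme": 10.5,
    "3DMark Port Royal": 7.1,
    "Metro Exodus RTX/DLSS OFF": 10.2,
    "Metro Exodus RTX/DLSS ON": 10.2,
    "Rainbow Six Siege": 5.8,
    "Horizon Zero Dawn": 10.5,
    "Forza Horizon": 4.7,
    "Far Cry": 8.1,
    "Assassin's Creed Odyssey": 9.2,
    "Shadow of the Tomb Raider RTX/DLSS Off": 9.6,
    "Shadow of the Tomb Raider RTX/DLSS On": 8.8,
    "Borderlands 3": 10.3,
    "Death Stranding DLSS/RTX ON": 6.7,
    "Death Stranding DLSS/RTX OFF": 11.5,
    "Control DLSS/RTX ON": 9.2,
    "Control DLSS/RTX OFF": 8.8,
}

# Simple arithmetic mean across all 16 tests.
average = sum(deltas.values()) / len(deltas)
print(f"Average delta across {len(deltas)} tests: {average:.1f}%")
# -> Average delta across 16 tests: 8.8%
```

Note that this is an unweighted mean of synthetic scores and game framerates alike; weighting games differently from 3DMark runs would shift the figure slightly, but not enough to change the ~10% story.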

We have also attached some screenshots of the benchmarks below, but the question then becomes: is the RTX 3090 really just 9% faster than the RTX 3080? And if so, why? The first answer that comes to mind is that the core-count increase we saw in the 3000 series is simply too big for current software stacks to handle. While the drivers have (probably) been updated to handle the massive throughput, game code and engines also have to scale up to take advantage of the available processing power. This is akin to games that were optimized primarily to take advantage of a single CPU core and therefore fail to scale perfectly across many cores.

What we are seeing with the RTX 3080 and 3090 appears to be a similar problem, where the hardware is being bottlenecked by software. AMD's GPUs are usually fondly referred to as FineWine, but if my gut instinct is correct, the RTX 3000 series is going to turn out to be the biggest load of FineWine silicon the gaming world has ever seen. With the cards delivering only half of the performance promised by NVIDIA, I am fairly certain we are going to see massive incremental performance improvements pushed via software.

What do you think is the explanation behind the non-linear scaling of shader count versus performance?