Why Are NVIDIA's GeForce GTX 400 Series GPUs Unimpressive?

Posted Mar 29, 2010

After a wait of nearly six months and several promises, we had high hopes for NVIDIA's Fermi GPUs. The next level in desktop gaming, as claimed by NVIDIA, has now finally arrived, and the word on the street is that it's not very impressive so far. Ironically, it's not because the architecture lacks basic features; it's because it is overstuffed with enhancements that don't really contribute to the bottom line when it comes to gaming. Dive in for a deeper look at how the new GeForce 400 cards stack up against the Radeon 5000 series.

Understanding the market

As I discussed a few weeks ago in my article on the GPU market outlook for 2010, the desktop market is divided into four major classes of consumers: Enthusiast, Performance, Mainstream, and Budget. Each has its own demands and needs along with a different perception of value, and quite frankly, one size (or idea) rarely fits all in this industry. So coming up with a new product that satisfies a major chunk (if not all) of these segments is a rough road to navigate for any of the major hardware vendors out there.

The three Ps that drive the perceived value of a hardware product like a graphics card are Price, Performance, and Power consumption. Two more quantities are also in play, but both derive from power consumption: Heat, which is directly dependent on power draw, and Noise, which is a function of the surface temperature.

Let's start with the Enthusiast segment and analyze what they look for in a new product. Out of the three Ps, enthusiasts focus only on the raw performance of the product (in this case, a GPU). These are the people who don't let money get in the way of their needs, which knocks the other two Ps out of the equation: for them, price is nothing compared to performance, and they can always buy a beefier power supply with a high-end cooling solution. These people, however, are a vast minority of the total consumer market, yet ironically, most hardware vendors tend to target this segment whenever they launch a new product.

Next up is the Performance segment, who crave high performance, but within accepted bounds of price and, therefore, power efficiency. These people are willing to spend money as long as it adds an equal share of performance to the equation, and they do consider add-on features that don't directly affect the performance figures. More often than not, they will settle for something that costs only about 60%-70% of the next Enthusiast-class offering, because beyond that point price increases usually stop yielding a matching increase in performance, which decreases the value. These people are greater in number than the enthusiasts but still not a commanding majority of the market. And yes, I happen to be one of them.

The Mainstream crowd, as the name suggests, makes up the major chunk of the market. Their perceived value of a product depends on the Price/Performance ratio more than anything else. If the performance doesn't justify the price, they won't buy it. It's that simple. This makes the segment a tough crowd to impress, mostly because they won't be willing to spend more than $200 on a single product like a graphics card. Most of these people also like to live in the here and now instead of putting much emphasis on features that would make them "future proof".

At the bottom are the Budget-focused individuals, who often let price crush their performance dreams, mostly because they can't really spend more than $100 on a product. Interestingly, this segment also includes people looking for graphics processors for more unconventional needs like HTPCs. They tend to go for the best feature set within their budget, because raw gaming muscle really isn't what they are after.

Where do the GTX 480/470 fit in?

So where do the GeForce GTX 480 and GTX 470 lie in this picture? Well, let's just go through their official price and performance numbers. The GeForce GTX 480 has a suggested price tag of $499 and consumes about 250W of power according to NVIDIA (though some reviews have revealed that it ends up drawing more power than the dual-GPU Radeon HD 5970, which has a TDP of 294W). And from the numbers we have seen around the web, it ends up being about 5%-20% faster than the Radeon HD 5870, which is currently selling for $420 and is rated at 188W.

Similarly, the GeForce GTX 470 will go for a suggested price tag of $350 and will consume around 215W of power. Ironically, it is only about 5%-15% faster than its competitor, the Radeon HD 5850, which is now going for $280 and consumes around 151W. The performance numbers I'm quoting are from reviews by respected online publications, though they are centered on the performance of today's games, usually with little or no anti-aliasing applied. If you crank up the anti-aliasing, the NVIDIA GPUs don't seem to take as big a hit as the AMD chips, though both struggle to retain playable frame rates at higher resolutions.

My Perceived Value Formula

Those price points easily pin the two GPUs in the Enthusiast segment, where money usually isn't a major issue. So let's try to quantify the perceived value of the new graphics cards with the help of a simple mathematical model. We will take the Radeon GPUs as the reference point on our scale, with their value set at 100%. From there, we'll compare the Price, Performance, and Power of the NVIDIA cards with their ATI competitors. I'm trying to come up with a technique that would help you calculate how much of a value advantage these new GPUs have. I'll base my calculations on how I perceive the value; feel free to come up with figures that suit your own taste.

  • Performance: For every additional percentage in performance, we’ll give the GPU four points. So 10% more performance means 40 points.
  • Price: For every additional dollar in price, we'll take away half a point. So a $20 higher price would mean –10 points.
  • Power: For every additional watt in power consumption, we’ll take away one point. So 20W more in TDP would mean –20 points.

Since this evaluation technique is very subjective, I'll try to be fair and take the maximum performance numbers being reported for NVIDIA's new GPUs. So for the GeForce GTX 480, we have:

Perceived Value = 4 x Performance Advantage – Power Disadvantage – 0.5 x Price Disadvantage

Perceived Value = 4 x 20% – 62 – 0.5 x 80 => –22

Starting with a base score of 100, that –22 adjustment leaves a total of 78 points. Similarly, for the GeForce GTX 470:

Perceived Value = 4 x Performance Advantage – Power Disadvantage – 0.5 x Price Disadvantage

Perceived Value = 4 x 15% – 64 – 0.5 x 70 => –39

That gives us a final score of 61 points. The following figure sums up the above calculations. One thing I would like to make clear is that these figures only take into account performance in today's games, and don't consider other features like GPU computing and 3D Vision Surround. I would also love some (positive or negative) feedback on this whole quantification process.
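For readers who want to plug in their own numbers, the scoring above can be sketched as a tiny Python function. The function name and structure are my own illustration, not part of any published tool; the inputs are the spec gaps quoted earlier (and I round the GTX 480's $79 price gap to $80, as in the calculation above).

```python
def perceived_value(perf_adv_pct, power_disadv_w, price_disadv_usd, base=100):
    """Score relative to the reference Radeon card (set at 100).

    +4 points per % of performance advantage,
    -1 point per extra watt of power draw,
    -0.5 points per extra dollar of price.
    """
    return base + 4 * perf_adv_pct - power_disadv_w - 0.5 * price_disadv_usd

# GTX 480 vs. HD 5870: +20% performance, +62W (250W - 188W), ~+$80
gtx480 = perceived_value(20, 250 - 188, 80)   # 100 + 80 - 62 - 40 = 78

# GTX 470 vs. HD 5850: +15% performance, +64W (215W - 151W), +$70
gtx470 = perceived_value(15, 215 - 151, 70)   # 100 + 60 - 64 - 35 = 61

print(gtx480, gtx470)
```

Swap in your own point weights (the 4, 1, and 0.5 multipliers) to see how the ranking shifts with your personal sense of value.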

I would like to mention one thing, though: the GeForce GTX 480 isn't utilizing the full potential of its Fermi core yet, due to problems with TSMC's 40nm manufacturing process that forced NVIDIA to disable 32 of the chip's 512 shader cores. And remember, yield problems caused Radeon 5800 series prices to soar above their suggested price points, so expect the same here as well.

Conclusion

But behind every dark cloud there is a silver lining, right? Well, kind of. In the case of the GeForce GTX 400 series, it is yet to be realized. The real power of the new Fermi architecture lies in general-purpose GPU computing, and in gaming perks like PhysX, 3D Vision Surround, and advanced tessellation. Unfortunately, these are little-used features that only a handful of games support, though NVIDIA is dead set on changing that. Some of these features would also require deep pockets, 3D Vision Surround in particular. For a typical setup, you would need a pair of GTX 400 series GPUs in SLI, three 120Hz (preferably Full HD) monitors, and of course the NVIDIA 3D Vision Kit, which can easily cost around $2,700 altogether. Compare that with a three-monitor Full HD Eyefinity setup, which you can put together for around $600-$750.

Applications (read: games) realizing the full potential of the Fermi architecture should start rolling out by the end of 2010, so you would think NVIDIA would be at an advantage then, right? Well, my honest answer is maybe. If AMD keeps on track with its current schedule, it will be launching the Northern Islands (Radeon HD 6000) architecture around that time, which would be something entirely new, unlike Evergreen (Radeon HD 5000), which was essentially an upscaled version of RV770. The new architecture is said to be based on a 28nm process, which means increased efficiency and lower heat levels. So any real performance advantage NVIDIA might gain by then may very well be negated by the new AMD lineup. And let's not forget, AMD will surely cut prices on its current GPUs once TSMC's 40nm process starts getting acceptable yields, giving the GeForce lineup a real run for its money.

I'll close the article by summing up the entire story in a line: "Indeed, the Fermi architecture is an ambitious effort with a lot of potential. Unfortunately, NVIDIA has utilized that potential to target the wrong market (read: High Performance Computing) instead of gaming."
