Multi GPU Technology Analysis – Nvidia SLI and AMD CrossFire Scaling, Frame-Time and Value Comparison

Usman Pirzada
Posted 1 year ago

Multi GPU configurations are part and parcel of many high end builds. Unless the builder has chosen to forgo the performance benefit of multiple graphics cards out of personal preference, you will find at least a dual SLI or CrossFire setup in most enthusiast rigs. However, as we are all aware, there are certain drawbacks to using more than one graphics card: micro stuttering, bad multi GPU profiles, and plain and simple diminishing marginal returns. Today, sourcing a post from IYD.KR by the well known DG Lee, we will take a look at how SLI and CrossFire scale internally, and how they compare to each other.

An in-depth look at the GPU scaling technologies of the current generation: Nvidia's TITAN X and 980 Ti in SLI versus AMD's R9 Fury X in CrossFire

If we take the performance offered by a single graphics card to be 100%, then a gamer employing two or more cards should logically expect gains of 100% for every card added (200% total with two cards, 300% with three, and so on). Of course, graphics cards (currently) do not stack that way, not even close. The real gains are much smaller, and a plethora of complications is added to the mix (frame pacing being one of the recent examples). Stuttering was a problem in the initial era of multi GPU, and has since trickled down to the subtler problem of micro-stuttering. The primary problem with any multi GPU technology (be it CrossFire or SLI) is that the GPUs do not logically act as a single, bigger GPU; rather, they split the work between them. This translates to real world (micro) delays whenever the GPUs are not 100% in sync.
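As a rough illustration of the ideal-versus-real gains described above (a sketch of my own with made-up frame rates, not numbers from the benchmark), scaling efficiency is simply the measured gain divided by the ideal linear gain:

```python
def scaling_efficiency(single_fps: float, multi_fps: float, num_gpus: int) -> float:
    """Fraction of the ideal linear speed-up actually achieved.

    1.0 means perfect scaling (N GPUs deliver N times the frame rate);
    anything below 1.0 is the diminishing return described above.
    """
    ideal_fps = single_fps * num_gpus
    return multi_fps / ideal_fps

# Hypothetical numbers for illustration only:
# a single card at 60 fps, two cards reaching 105 fps.
print(scaling_efficiency(60, 105, 2))  # 0.875, i.e. 87.5% of the ideal 120 fps
```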

Before we go any further, the reference benchmark suite for this test is given below. It consists of ten games and one synthetic benchmark, each tested at three resolutions: 1920×1080, 2560×1440 and 3840×2160. Please note that the scaling figures are relative to the single GPU configuration of each particular IHV, not absolute comparisons to one another. The complete list is given below:

  • Alien Isolation
  • Battlefield 4
  • Bioshock: Infinite
  • Crysis 3
  • Hitman: Absolution
  • Metro 2033
  • Metro: Last Light
  • Middle Earth: Shadow of Mordor
  • Sleeping Dogs
  • Sniper Elite III
  • 3D Mark Firestrike

Right off the bat, we can see that red outperforms green at every step. The tests here span three different resolutions and three different multi GPU configurations, and the AMD R9 Fury X wins every single one of them. I think this makes it crystal clear: AMD's multi GPU scaling has the upper hand over its Nvidia counterpart. The Nvidia GTX 980 Ti and the Nvidia GTX TITAN X scale almost identically, probably because it's the same chip inside both products: the GM200 Maxwell die. The diminishing marginal returns of every GPU added to the existing setup are also obvious. Scaling is pretty poor at lower resolutions (such as 1080p and 1440p), but that is more or less expected, since performance there is CPU bound; and let's face it, if you are running an SLI or CrossFire setup you probably aren't rocking an ageing 1080p monitor.

In an ideal world, the per-GPU shares of frame time add up to exactly 1 (normalized). With one GPU that share is 1; with two GPUs each should contribute 0.5, and so on. The problem, however, is that just like raw performance, frame time deteriorates as GPUs are added, and that cannot be avoided entirely. It can, however, be minimized, and that is where each technology comes in. Given below are the comparison tables (covering average relative performance and frame times) for AMD's CrossFire technology and Nvidia's SLI:


[Charts: AMD GPU Scaling / Nvidia GPU Scaling. Tables courtesy of DG Lee @ IYD.KR]

WCCFTech      Nvidia GeForce GTX Titan X (SLI)          AMD Radeon R9 Fury X (CF/XDMA)
Metric        Scaling dev.    Frame-time dev.           Scaling dev.    Frame-time dev.
Single GPU     0%              0%                        0%              0%
Dual GPU      (-) 11%         (+) 12%                   (-) 7%          (+) 7%
Triple GPU    (-) 21%         (+) 26%                   (-) 13%         (+) 15%
Quad GPU      (-) 29%         (+) 41%                   (-) 22%         (+) 28%

A comparison of deterioration due to GPU scaling in Nvidia SLI and AMD CrossFire (over XDMA).
*Lower is better (in absolute terms).
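For readers who want to check the arithmetic, here is a small sketch (my own reconstruction, not necessarily DG Lee's exact method) showing how both deviations can be derived from a single relative performance figure. The frame-time deviation is simply the reciprocal side of the scaling deviation:

```python
def deviations(relative_perf: float, num_gpus: int):
    """Given measured performance relative to a single GPU (e.g. 1.78
    for a dual setup running at 178%), return the signed deviations
    from ideal scaling and from ideal frame time, as fractions."""
    ideal = float(num_gpus)
    scaling_dev = (relative_perf - ideal) / ideal   # negative: below ideal scaling
    frametime_dev = ideal / relative_perf - 1.0     # positive: frames take longer
    return scaling_dev, frametime_dev

# Hypothetical dual-GPU result at 178% of a single card
# (chosen to reproduce the dual SLI row of the table above):
s, f = deviations(1.78, 2)
print(round(s, 2), round(f, 2))  # -0.11 0.12
```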



To make the impact of this information clearer, I have taken the liberty of calculating the deviations from the ideal score. Once again, AMD's multi GPU technology shines. AMD's deviation from the ideal frame time and the ideal performance is minimal in a dual CrossFire configuration (both at approximately 7% from the ideal). As more GPUs are added, the score deteriorates, until the quad GPU configuration is lagging approximately 22% behind the ideal scaling and 28% behind the ideal frame time. This means that instead of 1, the total frame time (for all GPUs combined) is actually 1.28, which will be the root cause of any complications that result from multi GPU.

The Nvidia variants fare worse. In dual SLI, the cards lag 11% behind the ideal scaling, a figure that rises to 29% at the top end (four GPUs). Frame time is a similar story, with deviation ranging from 12% all the way up to a massive 41% in quad SLI. This means that if you are rocking four of green's cards, they add up to a total frame time of 1.41 (instead of the ideal 1), which is a pretty massive drop, and roughly one and a half times AMD's deviation.
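Putting those figures together, a trivial sketch (my own arithmetic, not taken from DG Lee's post) of turning a reported frame-time deviation into the combined normalized frame time quoted above:

```python
def combined_frametime(frametime_deviation_pct: float) -> float:
    """Ideal normalized frame time is 1; the measured total is the
    ideal plus the reported percentage deviation."""
    return 1.0 + frametime_deviation_pct / 100.0

# The quad GPU deviations quoted in the article:
print(round(combined_frametime(41), 2))  # 1.41 (quad SLI)
print(round(combined_frametime(28), 2))  # 1.28 (quad CrossFire)
```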

The obvious next question when dealing with multiple GPUs in SLI or CrossFire is, of course, value. In the second half of this article, we will look at the value offered by these setups and investigate why AMD tends to have an advantage in multi GPU.

Next: Performance Per Dollar (Per GPU) Comparison and XDMA Analysis