The DirectX 12 standard and its specification are tricky things to fully understand at the best of times, and with companies throwing around claims like "full DirectX 12 support", it gets even more complicated. Nvidia has been marketing the GM200 as the first GPU with full DirectX 12 support, AMD has been offering Resource Binding at Tier 3, and Intel, while mostly silent on the subject, has featured Raster Order Views at Feature Level 11_1 for a long time. So who exactly has the mythical maximum possible DirectX 12 support, down to the last digit? The correct, technical answer is: no one.
Does Nvidia's feature level 12_1 trump AMD's resource binding tier 3 and async shaders?
Before we get into the nitty-gritty details, let's differentiate between APIs, feature levels and hardware specifications such as Resource Binding. DirectX 12 is an API, or Application Programming Interface: simply put, code that forms a bridge between the GPU and end-user software. Everyone is thoroughly excited about the DirectX 12 API because its low-level capabilities are a huge upgrade over its predecessor. Low-level access, as most of our readers know, means the ability of the API in question to access parts of the GPU directly. Feature levels, on the other hand, are pre-defined standards of GPU hardware capability and, in the strict sense, have almost nothing to do with the API itself. The DirectX 12 API requires graphics hardware supporting at least Feature Level 11_0 to run (defined below). And even after a new feature level is defined, older GPUs and architectures can still qualify for it: a graphics card originally sold as Feature Level 11_1 may very well meet all the requirements of Feature Level 12_0.
A feature level will, however, usually require a similarly named API to access its features in their entirety. So basically, all GPUs from FL 11_0 to FL 12_1 can run the DirectX 12 API completely and fully. The much-hyped advantage that is the reduction of CPU overhead - everyone gets that (provided you fall in the FL 11_0 to 12_1 band). The thing is, however, that newer GPUs have new hardware features which only the DirectX 12 API can finally access, so new standards had to be created: namely FL 12_0 and 12_1. Before we go any further, here is a short summary of the requirements for feature level qualification:
FL 11_0: Supports Resource Binding Tier 1 (Tiled Resources are optional at this level)
FL 12_0: Supports Resource Binding Tier 2, Tiled Resources Tier 2 and Typed UAV Tier 1
FL 12_1: Conservative Rasterization Tier 1 and Raster Order Views (ROV)
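The qualification rules above can be sketched in code. This is a simplified, illustrative model - the struct below is not the real `D3D12_FEATURE_DATA_D3D12_OPTIONS`, and in practice an application reads these tiers back through `ID3D12Device::CheckFeatureSupport` rather than supplying them itself:

```cpp
#include <string>

// Hypothetical summary of a GPU's hardware capabilities, mirroring
// the requirements listed above (field names are illustrative).
struct HardwareCaps {
    int  resourceBindingTier;     // 1..3
    int  tiledResourcesTier;      // 0 = unsupported
    int  typedUavTier;            // 0 = unsupported
    int  conservativeRasterTier;  // 0 = unsupported
    bool rasterOrderViews;
};

// Returns the highest feature level these caps qualify for, per the
// summary above. Note that FL 12_1 hardware must also meet the
// FL 12_0 requirements, which is why the checks are nested.
std::string highestFeatureLevel(const HardwareCaps& c) {
    if (c.resourceBindingTier >= 2 && c.tiledResourcesTier >= 2 &&
        c.typedUavTier >= 1) {
        if (c.conservativeRasterTier >= 1 && c.rasterOrderViews)
            return "12_1";
        return "12_0";
    }
    if (c.resourceBindingTier >= 1)
        return "11_0";
    return "none";
}
```

Feed it GM200-like capabilities and it reports 12_1; a GCN-like part with Tier 3 binding but no conservative raster or ROVs still tops out at 12_0, which is exactly the situation described below.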
Now that we know what the definitions are, here is the complete specification table of all IHVs with released hardware (including the latest Skylake iGPU and GM200):
IHV Hardware Specification Comparison
| Architecture | Haswell | Broadwell | Skylake | GCN 1.0 | GCN 1.1 | Fermi | Kepler | Maxwell 1.0 | Maxwell 2.0 |
|---|---|---|---|---|---|---|---|---|---|
| Raster Order Views | Yes | Yes | Yes | No | No | No | No | No | Yes |
| Typed UAV Formats (Tier) | 0 | 0 | 1 | 1 | 2 | 0 | 0 | 1 | 1 |
| Feature Level | 11_1 | 11_1 | 12_0 | 11_1 | 12_0 | 11_0 + partial 11_1 | 11_0 + partial 11_1 | 12_0 | 12_1 |
So here is the thing. Maxwell 2.0 (GM200) has the hardware characteristics necessary to earn the 12_1 stamp, so it does. However, AMD's GCN has had Resource Binding Tier 3 for a very long time now, not to mention Typed UAV Format Tier 2 and asynchronous shaders for parallel workloads. Similarly, Intel has supported Raster Order Views since the Haswell iGPUs and has been rocking them at Feature Level 11_1; to put this into perspective, Nvidia's architectures only support ROVs from Maxwell 2.0 onwards. You can clearly see that no hardware vendor has the undisputed best GPU hardware specification around: every IHV has a weakness or missing specification in some form or the other. So who has the best relative specification, all things considered? This is where it gets really tricky, and mostly unanswerable.
So which IHV specification is better?
The question we can answer, however, is: which specification (or lack thereof) will actually translate to a better (or worse) gaming experience at the end of the day? Here, the answer is relatively simple to explain. Let's start with AMD's edge. Resource Binding is, basically, the process of linking resources (such as textures, vertex buffers and index buffers) to the graphics pipeline so that shaders can process them. Tier 3 means AMD's architecture is limited here only by memory, and while that is a desirable trait, it happens out of sight, without translating to anything a gamer can observe on-screen. Similarly, Typed UAV formats aren't something an end user can observe; there isn't a fully developed ecosystem for them yet, and only when VR becomes mainstream will they affect more than a very small minority. Asynchronous compute shaders, however, are a performance-enhancing feature, so their benefit shows up not as new visual effects but as improved performance.
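To make the resource binding tiers concrete, here is a small sketch of the per-stage binding limits each tier guarantees, simplified from Microsoft's D3D12 hardware tiers documentation (only CBV and SRV limits are shown; treat the figures as approximate). "Full heap" means the count is bounded only by the descriptor heap itself, which is why Tier 3 hardware is described as memory-limited:

```cpp
#include <cstdint>

// "Full heap": the binding count is limited only by descriptor-heap
// size. One million is the guaranteed minimum heap size in D3D12.
constexpr uint32_t kFullHeap = 1'000'000;

struct BindingLimits {
    uint32_t maxCbvsPerStage;  // constant buffer views
    uint32_t maxSrvsPerStage;  // shader resource views (textures etc.)
};

// Simplified per-tier limits: Tier 1 has tight fixed caps, Tier 2
// lifts the SRV cap to the full heap, Tier 3 lifts everything.
constexpr BindingLimits limitsForTier(int tier) {
    switch (tier) {
        case 1:  return {14, 128};
        case 2:  return {14, kFullHeap};
        default: return {kFullHeap, kFullHeap};  // Tier 3
    }
}
```

The jump from 128 SRVs per stage at Tier 1 to an effectively unbounded count at Tiers 2 and 3 is the edge AMD advertises - real, but invisible to the gamer unless developers lean on it.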
I am not going to go into detail about Intel, primarily because it's not really a competitor to Nvidia or AMD. It has supported Raster Order Views since the Haswell days (fulfilling one half of the requirement for 12_1), and now with Skylake it also boasts full DirectX 12 API support at Feature Level 12_0.
Finally, we come to Nvidia. Nvidia has something that no other IHV currently has: and that is Conservative Rasterization. While the qualifying requirement is only Tier 1, GM200 has Tier 2 Conservative Rasterization support. Here is the thing, however: conservative rasterization covers every pixel the primitive touches at all, rather than only the pixels whose centers it covers, making coverage determination much more accurate than conventional rasterization - in other words, it will make a difference to the end user in the form of special graphical effects. Conservative raster will give way to many interesting graphical techniques - Hybrid Ray Traced Shadows, for one. We can conclude, therefore, that Nvidia's relative edge is something that will actually affect the average gamer's experience.
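The difference is easy to show with a toy software rasterizer (purely illustrative - not how the GPU implements it). Below, the standard rule covers a pixel only when the triangle contains the pixel's center, while the conservative test uses a simple bounding-box overlap, deliberately overestimating coverage in the way Tier 1 conservative rasterization is permitted to:

```cpp
#include <algorithm>

struct P { double x, y; };

// Standard rasterization rule: covered only if the triangle contains
// the pixel center (edge-function sign test).
bool centerInTriangle(P a, P b, P c, P p) {
    auto edge = [](P u, P v, P q) {
        return (v.x - u.x) * (q.y - u.y) - (v.y - u.y) * (q.x - u.x);
    };
    double e0 = edge(a, b, p), e1 = edge(b, c, p), e2 = edge(c, a, p);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}

// Over-conservative rule: does the triangle's bounding box touch the
// unit pixel at (px, py) at all?
bool bboxTouchesPixel(P a, P b, P c, double px, double py) {
    double minx = std::min({a.x, b.x, c.x}), maxx = std::max({a.x, b.x, c.x});
    double miny = std::min({a.y, b.y, c.y}), maxy = std::max({a.y, b.y, c.y});
    return maxx > px && minx < px + 1 && maxy > py && miny < py + 1;
}

// Count covered pixels on a 2x2 grid of unit pixels under both rules.
struct Coverage { int standard, conservative; };
Coverage rasterize(P a, P b, P c) {
    Coverage cov{0, 0};
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x) {
            if (centerInTriangle(a, b, c, {x + 0.5, y + 0.5})) ++cov.standard;
            if (bboxTouchesPixel(a, b, c, x, y)) ++cov.conservative;
        }
    return cov;
}
```

A thin sliver triangle at (0.1, 0.1), (1.9, 0.1), (1.9, 0.3) misses every pixel center, so standard rasterization draws nothing at all, while the conservative rule covers the two pixels the sliver passes through - exactly the property that techniques like ray-traced shadows rely on.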
An example of conservative raster based effects in the form of Nvidia's Mech Ti feature demo is embedded below:
TL;DR: All IHVs fully and completely support the DirectX 12 API. No hardware vendor can claim 100% support of every hardware specification, and the differences are usually negligible. That said, if one is deciding by features observable by the end user and gaming experience, the slight edge goes to Nvidia with its Feature Level 12_1 support. Keep in mind, however, that developers usually code for the lowest common denominator, which means Nvidia's edge depends entirely on how many devs actually use it.