Exclusive: The Nvidia and AMD DirectX 12 Editorial – Complete DX12 Graphics Card List with Specifications, Asynchronous Shaders and Hardware Features Explained


Direct3D 12 Feature Levels and Capabilities Overview

Before we get into the nitty-gritty details, let's differentiate between APIs, feature levels and hardware specifications such as Resource Binding. DirectX 12 is an API, or Application Programming Interface: simply put, code that forms a bridge between the GPU and end-user software. Everyone is thoroughly excited about the DirectX 12 API because its low-level capabilities are a huge upgrade over its predecessor.

Low-level access, as most of our readers know, means the ability of the API in question to access parts of the GPU directly. Now we come to feature levels. Feature levels are predefined standards of GPU hardware capability and, strictly speaking, have almost nothing to do with the API itself. The DirectX 12 API requires graphics hardware that conforms to Feature Level 11_0 at the very least. But even after a new feature level is defined, many older GPUs and graphics architectures can still qualify for it. For example, a graphics card previously rated at Feature Level 11_1 may very well meet all the requirements to fully support Feature Level 12_0.

The various DirectX 12 feature levels

A feature level will, however, usually require a similarly named API to access its features in their entirety. So basically, all GPUs conforming to FL 11_0 through FL 12_1 can run the DirectX 12 API completely and fully. The much-hyped advantage of reduced CPU overhead is something everyone will get (provided your card falls in the FL 11_0 to 12_1 band). The thing is, however, that newer GPUs introduced hardware features that only the DirectX 12 API can finally access, so new standards had to be created: namely FL 12_0 and FL 12_1.

Graphics cards supporting the following feature levels can run DirectX 12. The qualifying requirements for each feature level are also given:

  • FL 11_0: Supports Resource Binding Tier 1 and Tiled Resources Tier 1

  • FL 12_0: Supports Resource Binding Tier 2, Tiled Resources Tier 2 and Typed UAV Tier 1

  • FL 12_1: Supports Conservative Rasterization Tier 1 and Raster Order Views (ROVs)

Now that we know what the definitions are, here is the complete specification table of all IHVs with released hardware (including the latest Skylake iGPU and GM200):

IHV Hardware Specification Comparison

| WCCFTech | AMD GCN 1.0 | AMD GCN 1.1 | AMD GCN 1.2 | Nvidia Fermi | Nvidia Kepler | Nvidia Maxwell 1.0 | Nvidia Maxwell 2.0 | Intel Haswell | Intel Broadwell | Intel Skylake |
|---|---|---|---|---|---|---|---|---|---|---|
| Resource Binding (Tier) | 3 | 3 | 3 | 1 | 2 | 2 | 2 | 1 | 1 | 2 |
| Conservative Rasterization (Tier) | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 |
| Tiled Resources (Tier) | 1 | 2 | 2 | 1 | 2 | 2 | 3 | 1 | 1 | 2 |
| Raster Order Views | No | No | No | No | No | No | Yes | Yes | Yes | Yes |
| Typed UAV Formats (Tier) | 1 | 2 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | 1 |
| Feature Level Specification | 11_1 | 12_0 | 12_0 | 11_0* | 11_0* | 12_0 | 12_1 | 11_1 | 11_1 | 12_0 |

*11_0 with partial 11_1 support.

So here is the thing. Maxwell 2.0 (GM200) has the hardware characteristics necessary to earn the 12_1 stamp, so it does. However, AMD's GCN has had Resource Binding Tier 3 for a very long time now, not to mention Typed UAV Formats Tier 2 and Asynchronous Shaders for parallel execution. Similarly, Intel has supported Raster Order Views since its Haswell iGPUs and has been rocking them on Feature Level 11_1; to put this into perspective, Nvidia's architectures support ROVs only from Maxwell 2.0 onwards. You can clearly see that no hardware vendor has the undisputed best GPU hardware specification around.

Every IHV has a weakness or a missing capability in some form or another. So who exactly has the best relative specification, all things considered? This is where it gets really tricky; the question is mostly unanswerable.

Hardware vendors DirectX 12 hardware specifications compared

The question we can answer, however, is: which specification (or lack thereof) will actually translate to a better (or worse) gaming experience at the end of the day? Here, the answer is relatively simple to explain.

Let's start with AMD's edge. Since I will be tackling Async Shaders on the next page, let's begin with Resource Binding. Resource Binding is, basically, the process of linking resources (such as textures, vertex buffers and index buffers) to the graphics pipeline so that shaders can process them. Tier 3 lifts nearly all limits on how many resources can be bound at once, which means AMD's architecture is limited mostly by memory. And while this is a desirable trait, it is something that happens out of sight, without translating to anything a gamer can observe on-screen. Similarly, Typed UAV Formats aren't something an end user can observe. Currently there isn't a fully developed ecosystem for these features, and only when VR becomes mainstream will they affect anything but a very small minority.

Asynchronous compute shaders, however, are a performance-enhancing feature, so their benefit comes not from new visual effects but from improved performance.

Intel has supported Raster Order Views since the Haswell days (fulfilling one half of the requirement for 12_1), and now with Skylake it also boasts full DirectX 12 API support at Feature Level 12_0.

Finally, we come to Nvidia. Nvidia has something that no other IHV currently has: Conservative Rasterization, which, combined with Raster Order Views, earns Maxwell 2.0 the Feature Level 12_1 badge. While the qualifying requirement is only Tier 1, GM200 actually has Tier 2 Conservative Rasterization support.

Here is the thing, however: Conservative Rasterization is a technique that rasterizes every pixel a primitive touches, even partially, rather than only those pixels whose centers it covers, making coverage far more accurate than conventional rasterization. In other words, it will make a difference to the end user in the form of special graphical effects. Conservative Raster will enable many interesting graphical techniques, Hybrid Ray Traced Shadows for one.