NVIDIA GeForce VRSS Is A Welcome Advancement For Virtual Reality Enthusiasts


Alongside NVIDIA's CES driver release came a new and exciting way to take advantage of Turing's Variable Rate Shading capabilities, this time targeting VR enthusiasts. Variable Rate Super Sampling (VRSS) is an excellent addition, but how did we get here? It has been an interesting road to travel, watching developers of games, head-mounted displays, and graphics cards all work towards a better VR experience.

One of the earlier techniques put to use with excellent results was Foveated Rendering, which renders the center of the screen at native resolution while reducing the rendering resolution around the edges. That frees up a large portion of the rendering pipeline, keeping visual detail high where you are already focused at the expense of the periphery. NVIDIA even brought this technology to a flat-screen experience with Shadow Warrior 2 a couple of years ago, providing an easy and straightforward example of how it works.

For those following the VR scene, you'll notice this is exactly what Oculus has done with the Quest: it keeps the center crisp but image quality takes a dive when you start looking around. That's where Dynamic Foveated Rendering comes into play, using eye tracking to move the high-detail region wherever you look. The technology is already being shown off by companies like Pimax, but we're still waiting for it to come to fruition.

Something to note about VR games at this point is that they're mostly designed around the 'entry-level' performance class for VR requirements, which lands at the GTX 970 and R9 290 level of performance. That means having more GPU power doesn't necessarily equate to a better visual experience. Until now, that is.

This is where Variable Rate Super Sampling comes into play, something NVIDIA has been working on for the benefit of VR gamers. Before we get into the grit of it, a quick and dirty explanation is to think of it as reverse Foveated Rendering: the edge of the screen is rendered at native resolution while the center is given the supersampling treatment to crispen the image. And it works.
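To make that reverse-foveated idea concrete, here is a tiny illustrative sketch of my own (not NVIDIA's driver logic; the tile grid, falloff curve, and function name are invented for illustration) that maps a point's distance from the image center to a supersampling factor, with native 1x at the edges and up to 8x at the center:

```python
import math

def supersample_factor(norm_dist: float, max_factor: int = 8) -> int:
    """Map a normalised distance from the image centre (0.0 = centre,
    1.0 = edge) to a power-of-two supersampling factor.

    Illustrative only: real VRSS exposes just the supersampled central
    region, not a per-pixel falloff like this one.
    """
    if norm_dist >= 1.0:
        return 1  # edge renders at native resolution
    # Fade the exponent linearly from log2(max_factor) at the centre to 0.
    exponent = round((1.0 - norm_dist) * math.log2(max_factor))
    return 2 ** exponent

# Centre of the frame gets the full 8x treatment, the edge stays at 1x.
print(supersample_factor(0.0), supersample_factor(0.5), supersample_factor(1.0))
```

The key property this models is the one the article describes: shading work is concentrated where the user is looking, rather than spent uniformly across the frame.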

The image NVIDIA provided with its explanation of Variable Rate Supersampling makes it seem straightforward enough, and it would be easy to think the image on the left is just how it works, but VR isn't that simple. There are many different HMDs spanning various refresh rates, and that refresh rate is key to the experience. Regardless of whether an HMD runs at 80, 90, or 120Hz, it HAS to maintain that frame rate for the perception to be butter smooth, responsive, and not vomit-inducing. The catch is that this is a fixed timing interval, so what does your GPU do while it sits waiting between frames? Up until now, nothing.

For the sake of simplicity, we'll use the Rift S as an example. The Rift S has a single fast-switching LCD panel with a total resolution of 2560x1440 split between both eyes and a refresh rate of 80Hz, which makes it fairly easy to drive and results in 12.5ms frame intervals. Say your graphics card, something like an RTX 2080, is able to output most frames at a rate of 120 FPS, or 8.3ms per frame; now you're looking at a 4.2ms window of waiting around. The idea is to take that additional rendering time, start at the center of the frame, and supersample the image at up to 8x, working as far out from the center as possible before the time runs out. Sometimes that could cover only a small section of the screen; other times it could fill the entire available space with a much crisper image. That means a heavier load on your GPU, but to the benefit of the overall experience. And because it's variable and based on available time, the higher end your graphics card, the higher quality your VR experience can finally be.
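The frame-budget arithmetic above is simple enough to sketch. The numbers come straight from the example in the text; the function itself is just an illustration, not NVIDIA's scheduling code:

```python
# Minimal sketch of the frame-budget arithmetic: the spare time VRSS can
# spend supersampling is the HMD's fixed frame interval minus the GPU's
# actual render time, clamped at zero when the GPU is the bottleneck.

def headroom_ms(refresh_hz: float, render_ms: float) -> float:
    """Spare milliseconds per frame once the GPU has finished rendering."""
    frame_interval_ms = 1000.0 / refresh_hz
    return max(0.0, frame_interval_ms - render_ms)

# Rift S example from the text: an 80 Hz panel (12.5 ms interval) with a
# GPU rendering at ~120 FPS (~8.3 ms) leaves roughly 4.2 ms of headroom.
print(round(headroom_ms(80.0, 1000.0 / 120.0), 1))  # prints 4.2
```

A slower card that needs the full 12.5ms (or more) per frame gets zero headroom, which is why VRSS scales with GPU horsepower.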

To enable VRSS, open the NVIDIA Control Panel and select Manage 3D Settings, then scroll to Virtual Reality – Variable Rate Supersampling, and change the setting to “Adaptive”.

The great thing about VRSS is that it's supported entirely through the driver and requires no patch on the game's end, so I hope to see adoption spread quickly. It does, however, require the game to use a forward renderer and support MSAA. While my VR library is still rather small, I was able to grab Spiderman: Homecoming - Virtual Reality Experience to see if I could tell the difference. Yeah, I could, and it wasn't hard to spot the improvements; I can't wait to see VRSS spread more widely. NVIDIA has been testing this internally, and so far over 20 games have met their criteria and are supported at this time.

VRSS Game Support At Time Of Writing

  • Battlewake
  • Boneworks
  • Eternity Warriors™ VR
  • Hot Dogs, Horseshoes and Hand Grenades
  • In Death
  • Job Simulator
  • Killing Floor: Incursion
  • L.A. Noire: The VR Case Files
  • Lone Echo
  • Mercenary 2: Silicon Rising
  • Pavlov VR
  • Raw Data
  • Rec Room
  • Rick and Morty: Virtual Rick-ality
  • Robo Recall
  • Sairento VR
  • Serious Sam VR: The Last Hope
  • Skeet: VR Target Shooting
  • Space Pirate Trainer
  • Special Force VR: Infinity War
  • Spiderman: Far from Home
  • Spiderman: Homecoming – Virtual Reality Experience
  • Talos Principle VR
  • The Soulkeeper VR