
Rev Lebaredian on NVIDIA GameWorks and How It Will Help Democratize Game Development

Nov 28

It’s been a while since we first interviewed Rev Lebaredian, NVIDIA’s Vice President of GameWorks and Lightspeed Studios.

Earlier this year we started talking with the fine folks at NVIDIA to try to set up another interview. It took quite some time to coordinate, but we managed it, and our new NVIDIA GameWorks chat with Rev Lebaredian is now finally available to you in both written and video form. In either case, it's going to be a fairly long (though hopefully interesting) dive. Enjoy!


Why don’t you introduce yourselves to the readers who aren’t familiar with your work?

Sure. I’ve been at NVIDIA for about fifteen years now, and during my time here I’ve worked primarily on gaming technologies. In recent years, we’ve been developing technologies not only for games specifically but for anything that can benefit from the same core technologies, like VR, AR and now robotics as well.

Can you give us a brief overview of how things have progressed at GameWorks since our last chat?


We’re really happy with what’s been happening with gaming in general and our part in it. We’ve been increasing our investment in developing gaming technologies through our GameWorks crew and, in the process of doing so, we’ve extended the reach of our technologies.

Not only are more games than ever using it – not just big triple-A titles but also indie titles and interesting new genres that are popping up – but the same technologies are being used in new modes of immersion like VR and AR. We’re also using them for simulation programs and other non-gaming purposes. We recently announced our robotics project, Project Isaac: we’re building a robotics simulator derived from Unreal Engine 4 that incorporates all of our GameWorks technologies as critical components.

We’ve also been hard at work advancing in the field of deep learning. We’ve been looking at various ways of enabling game developers with specific technologies derived from AI. One of the most interesting areas is content creation. A big problem we’ve identified in game development is that the cost of developing large worlds and content is so high, so a key problem we’re trying to attack these days is how we enable game developers to create much richer and larger worlds with the same resources they have, leveraging all the AI and simulation tech that we’re bringing to the table.

We were one of the first companies to see the potential of AI, and a big part of that is that it was happening on our platform first.

In 2012, when AI techniques started setting records in computer vision, it was all done on our platform, on CUDA and NVIDIA hardware. We saw it early on and we’ve been investing in it ever since; the way we look at it, deep learning is not a field by itself, it’s an enabling technology for everything.

This past GDC, we released a few of the technologies we’ve been working on as a sort of taste of what’s to come. Three of them are available for anybody to try on our website.

We have a technology to do super-resolution, to up-rez images and textures to four times the original size. It does an amazing job of filling in details that aren’t there in the original image, based on what it’s learned from a large database of images on the Internet (ImageNet).

We have a texture multiplier, another key technique that’s necessary for constructing large and complex worlds. Painting textures that don’t repeat across a large world like Grand Theft Auto’s or The Witcher’s is a laborious, very manual task, so we have a technique that will take a small snippet of your texture and create unlimited variations of it.

We have a photo-to-material technology too, where you can take two pictures of some material, say a rug or wood, feed those into our algorithm, and it will automatically generate all of the texture maps you need, with the parameters and the lighting model, to reproduce that same material exactly the way it is in real life.

At this past SIGGRAPH, we also showed techniques for using deep learning to streamline and enhance facial animation. We worked with Remedy to take their state-of-the-art, best-in-class gaming facial capture rig, which involved nine cameras that had to be calibrated just right (and a whole bunch of post-processing after the capture), and streamlined it into a much simpler process: you take one video stream of the face, run it through our neural network, and you get the same facial animation out of it.

We then took it a step further. Instead of going from video to facial animation, we now have the ability to go from audio, from the voice, directly to facial animation which opens up a lot of possibilities. You can potentially create new scripts for your characters in different languages when you want to localize it and have it automatically generated, without having to have an army of animators do all of the animation for every character inside your game.

That’s pretty impressive, as it surely is one of the biggest undertakings in game development.           

We’re just at the beginning. This kind of technology is going to democratize game development. We’ve been seeing this trend for years now, where people take existing games and game engines and modify them to make new types of games, with whole new genres coming out of it. Dota started as a Warcraft 3 mod and spawned the whole MOBA genre.

What gamers really crave is the ability to actually create their own worlds, their own games within these rich environments. What they’re missing is an army of artists and engineers who can do the specifics. What AI is going to allow us to do is not only take the traditional game developers and really supercharge the types of games and scale of games that they can build, but it’s also going to further the democratization of game development so that anybody can start creating their own rich worlds.

Minecraft is probably the best example. By itself, it’s not really a game, more like a platform where people can create their own worlds and mini-games. I mean, Minecraft is great, but ultimately I think people want more than just 8-bit style graphics – they want to be able to create super realistic worlds. The only thing standing between a kid playing Minecraft and doing that is all of the technical know-how and artistry necessary; the imagination is there, people know what they want to create. We want to move towards that. It’s going to be a little while, but we’re taking steps towards it.

Usually, we’ve seen mostly AAA games implementing GameWorks technology. However, indie studio Theory Interactive announced they’ll be using WaveWorks and PhysX in their game RESET, while EXOR Studios is using PhysX for X-Morph: Defense. How did the partnership with them come about? Can we expect to see more indie games using GameWorks going forward?

We’ve always worked with developers of all sizes. When we see a game that’s cool, we don’t care how big the developer is, we’ll work with anybody. If you look back historically, there have been developers of different scales that we’ve worked with.

We have probably the best developer relations team in the world within the gaming space. We’ve been building this team for well over a decade now, we’ve got guys that have been part of NVIDIA doing this and this alone for the whole time. There’s virtually no game developer out there that we’re not talking to, and we’ve built trust with all of them.

The way it usually works is we keep these conversations going: we chat with them at various events and game shows like GAMESCOM, E3 et cetera, but outside of that we also keep our communication lines open with them. Often, they’ll see something cool we’ve done with another game developer, or something we’ve announced in our own demos, and they come to us and say ‘Hey, we want that in our game, it’d be perfect for what we’re trying to do!’

It happens kind of organically. We don’t try to push our technology into a game developer who doesn’t want it, we generally wait for them to ask us. Sometimes they’ll come to us and say ‘Hey, what do you have that’s not public yet?’ and we’ll take a look at their game and say, ‘You know, we have this one thing we’re working on, it would be perfect for what you guys are trying to do’, and then it goes on from there.

The most recent GameWorks tech demo showcased Flow running on DirectX 12. Is that an indication that you’ll eventually add DX12 (or Vulkan) support to the rest of the GameWorks library?

The plan is to support any API that game developers care for. In an ideal world, every one of our GameWorks technologies would run on any API, on any OS and on every piece of hardware, no matter how big or small. The truth is that sometimes that’s not technically possible, and also that, despite the fact that we have a lot of resources, they’re not unlimited. We can’t support everything well, so we prioritize which technologies we move to a specific API based on developer demand.

Talking to all of the game developers we’ve got relations with, the two that were in big demand recently were Flex and Flow. Flex is our particle-based, position-based dynamics physics solver that lets us do a unified simulation of various types of physical phenomena; you can do rigid bodies, fluids and soft bodies all combined in one. It’s a really cool technology – when we first developed it, it was on CUDA because it required a level of sophistication that other APIs didn’t provide yet. But the demand for it was high and people wanted to run it with DirectX, so we ported it to DirectX 12 and released that.
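For the technically curious, the core idea behind a position-based dynamics solver like Flex can be sketched in a few lines: instead of integrating forces, the solver predicts new particle positions and then iteratively projects constraints (here, a simple distance constraint) directly onto those positions. This is a toy illustration under our own simplifying assumptions, not Flex’s actual implementation:

```python
# Minimal position-based dynamics sketch: a hanging chain of particles
# connected by distance constraints (illustrative only, not Flex itself).

def pbd_step(pos, prev, rest_len, dt=0.016, iters=10, gravity=-9.8):
    n = len(pos)
    # 1. Predict positions from the implicit velocity plus gravity.
    pred = []
    for i in range(n):
        vx = pos[i][0] - prev[i][0]
        vy = pos[i][1] - prev[i][1]
        pred.append([pos[i][0] + vx, pos[i][1] + vy + gravity * dt * dt])
    pred[0] = list(pos[0])  # pin the first particle in place
    # 2. Iteratively project distance constraints directly on positions.
    for _ in range(iters):
        for i in range(n - 1):
            dx = pred[i + 1][0] - pred[i][0]
            dy = pred[i + 1][1] - pred[i][1]
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = (d - rest_len) / d * 0.5
            if i == 0:
                # The pinned particle absorbs no correction; move only its neighbor.
                pred[1][0] -= dx * corr * 2
                pred[1][1] -= dy * corr * 2
            else:
                pred[i][0] += dx * corr
                pred[i][1] += dy * corr
                pred[i + 1][0] -= dx * corr
                pred[i + 1][1] -= dy * corr
    return pred, pos  # (new positions, new previous positions)

# A three-particle chain starting horizontal; gravity swings it downward
# while the constraints keep the links at their rest length.
pos = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
prev = [p[:] for p in pos]
for _ in range(100):
    pos, prev = pbd_step(pos, prev, rest_len=1.0)
```

The appeal of the position-based approach for games is exactly what Lebaredian describes: the same projection loop handles fluids, cloth and rigid shapes by swapping in different constraint types, and it stays stable at large time steps.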

Flow is the other one. It’s sort of our third generation of voxel-based fluid solvers. Previously we had Turbulence, which was based on CUDA, but with this version we decided ‘Alright, a lot of game developers want it and they want to run it on DirectX’, so we ported and released that too.
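A voxel-based fluid solver like Flow works on quantities stored in a grid rather than on particles. One of its basic ingredients can be sketched very simply: diffusing a density field with Jacobi iterations, in the style of Stam’s classic “Stable Fluids” formulation. This is a 2D toy for illustration, not Flow’s actual code:

```python
# Minimal sketch of one ingredient of a grid (voxel) fluid solver:
# implicit diffusion of a scalar density field on a 2D grid.

def diffuse(field, rate, iters=20):
    """Jacobi iterations for implicit diffusion (Stam-style stable step)."""
    h, w = len(field), len(field[0])
    out = [row[:] for row in field]
    for _ in range(iters):
        nxt = [row[:] for row in out]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                # Each cell relaxes toward the average of its neighbors.
                neighbors = out[y - 1][x] + out[y + 1][x] + out[y][x - 1] + out[y][x + 1]
                nxt[y][x] = (field[y][x] + rate * neighbors) / (1 + 4 * rate)
        out = nxt
    return out

# A single blob of density spreads symmetrically to its neighbors.
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 1.0
smoothed = diffuse(grid, rate=0.5)
```

A full solver layers advection, pressure projection and vorticity on top of steps like this, in 3D; the grid structure is what makes it map so well to GPUs.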

Over time, we plan to support DirectX 12 with every single GameWorks module if it’s possible and as game developers request it.

With regards to Ambient Occlusion solutions, we’ve only seen the high-quality VXAO in Rise of the Tomb Raider so far and we’ll see it next year in Final Fantasy XV Windows Edition. Is that because it’s too taxing for hardware? Should we expect to see mainly HBAO+ support in the future, and are you still working to improve the HBAO+ technique?

Other than those two, there are other games that I can’t talk about that are potentially going to use VXAO, but it’s true, VXAO is a lot more expensive computationally than screen space methods like HBAO+.

We’ve known all along that screen space is not ideal: it can’t give you shadows on things that aren’t in the camera’s view, it doesn’t give shadows right behind the first shell of things you see in the camera, and so on.

VXAO solves all of that, but of course there’s a computational cost. The way we work is that we look at the future of gaming and computer graphics way out, before anybody else starts thinking about it, because we have to decide where we’re going to place our bets when we design our architecture. When we create a GPU architecture, we invest billions of dollars in its development, and it starts early, three to five years out. My team, the GameWorks guys, is often asked by the architecture guys what’s coming and where NVIDIA should put the transistor budget to take it to the next level.

We try to identify these things. VXAO and VXGI were possible because of specific features we added to the Maxwell architecture. We were under no illusion that on Day One, when we released this thing, there would be a hundred games out there using it, because it’s cutting edge, it requires modifications to your engine and the cost is high. But if we don’t put it out there for people to start experimenting and playing with, it’ll never end up in games. We’re playing the long game: we developed these technologies trying to envision what we want to see in computer graphics years from now, and then we put our money where our mouth is.

We developed not only the hardware for it but the layers of software, even up to integrations in popular game engines, so that game developers can benefit from it whenever they’re ready.

The volumetric lighting in Bethesda’s Fallout 4 looked great. Are you seeing other developers interested in using it for their games?

We are, but unfortunately I can’t talk about unreleased games and the technologies in them. Sorry!

No problem. What’s next for TXAA? Will it remain your main anti-aliasing solution within GameWorks for the foreseeable future?

Anti-aliasing, in general, is a core, fundamental problem in computer graphics that will never truly be solved, because fundamentally what we do in computer graphics is try to create a discrete representation of something that’s actually continuous in the world.

The real world doesn’t have pixels, so we’re forced to do this, and whenever you start sampling, converting continuous functions into discrete ones, you’re going to have aliasing; there’s no way around it. In the real-time world, what we’re always trying to do is find a set of trade-offs we can make to get performance to the right place while maintaining quality.

Even as we’ve gotten more and more horsepower to render with, it seems we’re still not willing to spend it on more aggressive types of anti-aliasing. We take that budget and apply it to other things, to add more complexity. So we’ve created a bunch of hacks for various AA techniques, which is fine, as all of computer graphics is a series of hacks.

The real world is very, very complex, and the physics of life is virtually impossible to simulate exactly given the computational power available to us. So what we do is approximations, and TXAA has been a very successful one as far as the tradeoff between quality and performance. It’s by no means the end; we continue to invest in more and diverse anti-aliasing techniques. We can see a potential for using AI in this area as well; this past SIGGRAPH we showed some techniques in the ray tracing world on denoising, which is a related problem. There’ll be a lot more to come.
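The sampling problem he describes is easy to demonstrate in a few lines. A pixel straddling a continuous edge gets a hard on/off value with one sample per pixel (the familiar jaggies), while supersampling recovers fractional coverage at the cost of many more samples. A toy illustration of the principle, nothing to do with TXAA’s actual algorithm:

```python
# Toy illustration of aliasing vs. supersampling: estimate pixel coverage
# of a diagonal edge (the half-plane y < x) with 1 or many samples per pixel.

def coverage(px, py, samples_per_axis):
    """Fraction of sample points inside the half-plane y < x for pixel (px, py)."""
    n = samples_per_axis
    inside = 0
    for i in range(n):
        for j in range(n):
            # Regular grid of sample positions inside the unit pixel (no jitter).
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            if y < x:
                inside += 1
    return inside / (n * n)

# One sample per pixel: the edge pixel snaps to 0 or 1 (a jaggy).
row_1spp = [coverage(px, 0, 1) for px in range(4)]
# 8x8 supersampling: the edge pixel gets fractional coverage near the true 0.5.
row_64spp = [coverage(px, 0, 8) for px in range(4)]
```

The trade-off he mentions is exactly this: 64 samples per pixel gives a smooth edge but costs 64x the shading work, which is why real-time techniques lean on cheaper approximations instead.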

Multi-res Shading is a very interesting technology derived from VRWorks. We’ve seen it used in Shadow Warrior 2 with great success outside of VR, though. Is that something you think will be more prominent, with developers trying to save performance on 4K resolutions with techniques like checkerboard rendering?

On the Multi-Res Shading front, that’s a really interesting path that we started with Maxwell. The architecture allowed us to render at different resolutions in different portions of the screen, though it was limited to a 3×3 grid. We enhanced that with the Pascal architecture, and we really feel that, given the resolutions we have to go to now (4K, with 8K on the horizon) as well as what VR requires in terms of richness and complexity, there’s no way to do that well without some way to vary the sampling rate across the image.

If you render everything at full resolution, you’re wasting a lot. So we continue to invest there, and you’ll see in our future architectures that it’s going to get better, both in terms of quality and in how easy it is for developers to integrate into their pipelines.

The idea comes from all the research going on on the VR side into what’s called foveated rendering, where you take more samples in the region of interest. In an ideal world, we would actually know what you’re looking at: if there were some way for us to track your gaze and know what part of the image you’re looking at, we could increase the resolution in that area and decrease it in the periphery. It’s not hard to imagine that, sometime in the not-too-distant future, you’ll have your webcam or some device actually tracking your gaze, and you’ll get the effective resolution of 8K rendered with a fraction of the number of samples.
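The savings he’s alluding to are easy to put numbers on. With made-up but plausible parameters (a small full-rate region of interest, a reduced shading rate everywhere else), a back-of-the-envelope calculation shows how much shading work variable-rate sampling can avoid at 8K. These figures are our own illustration, not any shipping Multi-Res Shading configuration:

```python
# Back-of-the-envelope sketch of foveated / multi-resolution shading savings.
# All parameters below are illustrative assumptions, not NVIDIA's numbers.

def foveated_samples(width, height, fovea_frac, periphery_scale):
    """Shading samples needed when a region covering `fovea_frac` of the
    screen area runs at full rate and the rest at `periphery_scale` rate."""
    total = width * height
    fovea = total * fovea_frac                       # full-rate region
    periphery = total * (1 - fovea_frac) * periphery_scale
    return fovea + periphery

full_8k = 7680 * 4320                                # one sample per pixel at 8K
# Assume a 10% fovea at full rate and the periphery at a quarter of the rate.
fov = foveated_samples(7680, 4320, fovea_frac=0.10, periphery_scale=0.25)
savings = 1 - fov / full_8k                          # fraction of shading work saved
```

Under these assumptions roughly two thirds of the shading work disappears, which is why gaze tracking plus variable-rate sampling is such an attractive route to “effective 8K”.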

Is there anything else you’d like to share about NVIDIA GameWorks with NVIDIA fans and PC gamers as a whole?

All the work we’ve been doing is now critical in so many other important areas. With VR, autonomous vehicles and robotics starting to happen, what we’re finding is that all of this technology we’ve built for the enjoyment of gamers is actually useful for a much wider set of things.

What’s exciting to me is that not only is gaming going to help these fields, but I feel like it’s going to circle back. For example, we’ve been investing in physics for gaming for well over a decade now and we continue to increase our investment there; we’ve always felt that, as important as graphics are, we need the behavior of things in these virtual worlds we’re creating to feel realistic and be correct.

But, at some point with games and game development, you reach a sort of ‘good enough’. We don’t have to be completely accurate with our physics because some cheats are okay; we’re willing to deal with things like interpenetrating objects for the sake of performance. In the early days it was absolutely necessary, we just didn’t have the computing power to do it right, but now we’ve kind of accepted it in the gaming world as normal, and there hasn’t been a lot of motivation to fix these things.

However, as we take these gaming technologies into new spaces like robotics or VR, these cheats aren’t acceptable anymore. That’s been forcing us to invest in taking the physics in our gaming technologies to another level, and I feel like that’s wonderful for gaming, because all this work, which we probably wouldn’t have done for some time otherwise, is going to filter back into games and enhance the experience overall, in a sort of virtuous circle.

Thank you for your time.
