NVIDIA Next-Gen GPUs Could Use AI To Enable Far More Realistic, Real-Time HairWorks Models in Upcoming AAA Titles
One thing that pushes gamers to buy new graphics cards is that they enable a new generation of graphical feature support in the latest AAA titles. Both NVIDIA and AMD have been working with several game developers to incorporate their latest graphics features into AAA titles, and both do a pretty good job of keeping PC gaming visually distinct from, and better than, its console counterparts. It’s always a race over who gets the best features into a major shipping AAA title and who runs it well, and it looks like researchers are already working on the next big thing.
Researchers Use AI To Power a New Level of Realistic, Life-Like Hair Models; NVIDIA Could Use Similar Technology For Next-Gen HairWorks
The latest report comes from PCGamesN, who have found that researchers at the University of Southern California, Pinscreen, and Microsoft are currently developing a new hair rendering method powered by AI (deep neural networks). The 3D hair models are generated from a single 2D image and are said to work in real time, which is a feat in itself.
“Realistic hair modeling is one of the most difficult tasks when digitizing virtual humans,” the researchers say. “In contrast to objects that are easily parameterisable, like the human face, hair spans a wide range of shape variations and can be highly complex due to its volumetric structure and level of deformability in each strand.”
“Our deep learning approach, in contrast, is highly efficient in storage and can run 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and the visibility of each strand is also used as a weight term to improve the reconstruction accuracy.”
“The hair from our method can preserve better local details and looks more natural,” the researchers say. “Especially for curly hairs”. via PCGamesN
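The collision loss the researchers mention can be illustrated with a toy example. The sketch below is a minimal pure-Python illustration, assuming a simplified spherical head model and made-up strand data (the paper’s actual loss and head geometry are more involved): strand points that penetrate the head accumulate a squared penalty, so plausible strands score near zero.

```python
import math

# Hypothetical simplification: the head is a unit sphere at the origin.
HEAD_RADIUS = 1.0

def collision_loss(strand):
    """Sum of squared penetration depths for every point of a strand
    (a polyline of 3D points) that falls inside the head sphere."""
    loss = 0.0
    for (x, y, z) in strand:
        dist = math.sqrt(x * x + y * y + z * z)
        penetration = max(0.0, HEAD_RADIUS - dist)  # inside the head?
        loss += penetration ** 2                     # squared penalty
    return loss

# A strand resting on the scalp and falling away from the head: no collision.
good_strand = [(0.0, 1.0, 0.0), (0.0, 1.2, 0.1), (0.0, 1.4, 0.3)]

# A strand whose tip dips inside the head: positive loss that a training
# procedure would push back toward zero.
bad_strand = [(0.0, 1.0, 0.0), (0.0, 0.9, 0.0), (0.0, 0.5, 0.0)]

print(collision_loss(good_strand))  # 0.0
print(collision_loss(bad_strand))   # > 0
```

During training, adding such a term to the reconstruction loss nudges the network away from synthesizing strands that clip through the scalp.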
While rendering realistic hair in games is a demanding task, the researchers believe that AI and neural networks can address this high resource demand. The network was trained on a dataset of 40,000 different hairstyles and a total of 160,000 2D orientation images. In just a few milliseconds, the network was able to generate 3D-rendered hair in a range of styles, colors and lengths, as inferred from the 2D orientation image. The researchers note that the system is not perfect yet, but with further training and an expanded dataset, the technology could see real-world use.
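To give a sense of what such a network consumes, here is a minimal sketch of computing a per-pixel 2D orientation field from image gradients. The toy "image" and the central-difference approach are my own simplified illustration, not the paper’s method (real pipelines typically estimate orientation with oriented filter banks over full photographs):

```python
import math

# Toy grayscale "image" forming a diagonal ramp, standing in for
# diagonal hair-like streaks (hypothetical data).
img = [
    [0, 1, 2, 3],
    [1, 2, 3, 4],
    [2, 3, 4, 5],
    [3, 4, 5, 6],
]

def orientation_field(image):
    """Per-pixel orientation (radians) of the local intensity gradient,
    via central differences on interior pixels."""
    h, w = len(image), len(image[0])
    field = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
            gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
            field[(x, y)] = math.atan2(gy, gx)
    return field

field = orientation_field(img)
# The diagonal ramp has a constant gradient, so every interior pixel
# reports the same orientation (45 degrees here).
print(round(math.degrees(field[(1, 1)])))  # 45
```

A map like this, rather than raw pixels, is the kind of input the quoted description says the convolutional network takes before predicting strand features on the parameterized scalp.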
Coming to real-world usage, both NVIDIA and AMD have their own hair rendering techniques. The green team uses its HairWorks technology, which is part of the GameWorks SDK, while the red team uses its TressFX renderer. Both technologies have been used in a wide range of AAA titles, but both have a serious impact on gaming performance. NVIDIA’s HairWorks in particular hits performance hard, while being visually more impressive and applicable to a broader range of environmental models. Some of the biggest titles to use NVIDIA’s HairWorks technology include:
- The Witcher 3: Wild Hunt
- Far Cry 4
- Call of Duty: Ghosts
There’s another potential application for an AI-powered renderer on NVIDIA GPUs: Turf Effects. If AI can render and simulate hair strands, it could presumably do the same for blades of grass. Digging deeper, we can see more potential uses of deep neural networks across a range of NVIDIA technologies.
A few months ago, Jensen Huang, CEO of NVIDIA, hinted at the use of AI in future video games. The talk concerned the use of Tensor Cores in Volta-powered NVIDIA GPUs.
And I think I already really appreciate the work that we did with Tensor Cores, and as the updates are now coming out from the frameworks — Tensor Core is the new instruction set and new architecture — deep learning developers have really jumped on it, and almost every deep learning framework is being optimized to take advantage of Tensor Cores. On the inference side, that’s where it would play a role in video games. You could use deep learning now to synthesize and to generate new art, and we were demonstrating some of that, as you could have seen: whether it improves the quality of textures, generates artificial characters, or animates characters, whether it’s facial animation for speech or body animation.
The type of work that you could do with deep learning for video games is growing. And that’s where Tensor Core take-up could be a real advantage. If you take a look at the computational throughput that we have with Tensor Cores compared to a non-optimized GPU or even a CPU, it’s now two-plus orders of magnitude greater. And that allows us to do things like synthesize images in real time, synthesize virtual worlds, and make characters and faces, bringing a new level of virtual reality and artificial intelligence to video games. via NVIDIA
What Jensen is saying lines up with what we mentioned above: there’s an application for AI in rendering and simulating realistic effects in games. NVIDIA has GPUs with Tensor Cores that can power these AI-heavy workloads, saving development time and costing little to no performance for realistic visuals. Sure, it may end up being primarily a developer-side tool used to make games look better for players, but I would personally love to see a new generation of graphical features introduced with the upcoming graphics cards.