In early 2018, Microsoft acquired PlayFab, a company providing cloud-based back-end tools and services for game developers.
Two years on, PlayFab founder James Gwertzman is still working as General Manager of the team, which leverages the power of Microsoft's Azure network. How does PlayFab help game developers, though? Gwertzman shared some insights on that in a lengthy interview with the press, transcribed by GamesBeat.
For example, he revealed that one of Microsoft's internal Xbox Game Studios is experimenting with machine learning models to upscale textures in real time. The results are reportedly so close to the native assets that developers could ship low-resolution textures and simply upscale them on the fly.
One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real-time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it.
Like literally not having to ship massive 2K by 2K textures. You can ship tiny textures. The download is way smaller, but there’s no appreciable difference in game quality. Think of it more like a magical compression technology. That’s really magical. It takes a huge R&D budget. I look at things like that and say — either this is the next hard thing to compete on, hiring data scientists for a game studio, or it’s a product opportunity. We could be providing technologies like this to everyone to level the playing field again.
In this case, it only works by training the models on very specific sets. One genre of game. There’s no universal texture map. That would be kind of magical. It’s more like if you train it on specific textures it works with those, but it wouldn’t work with a whole different set.
It’s especially good for photorealism, because that adds tons of data. It may not work so well for a fantasy art style. But my point is that I think the fact that that’s a technology now — game development has always been hard in terms of the sheer number of disciplines you have to master. Art, physics, geography, UI, psychology, operant conditioning. All these things we have to master.
That’s where I come in. At heart, Microsoft is a productivity company. Our employee badge says on the back, the company mission is to help people achieve more. How do we help developers achieve more? That’s what we’re trying to figure out.
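Gwertzman's "magical compression" framing is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses illustrative numbers that are not from the interview: an uncompressed RGBA8 texel and a hypothetical 512x512 shipped texture standing in for the "really low-res" version.

```python
# Back-of-the-envelope download savings from shipping low-res textures
# and upscaling on device. Illustrative numbers, not from the interview.
BYTES_PER_TEXEL = 4            # uncompressed RGBA8


def texture_bytes(side):
    """Size in bytes of a square, uncompressed RGBA8 texture."""
    return side * side * BYTES_PER_TEXEL


full = texture_bytes(2048)     # the "2K by 2K" texture Gwertzman mentions
small = texture_bytes(512)     # a hypothetical shipped low-res version

print(full // (1024 * 1024))   # 16 (MB for the full-res texture)
print(small // (1024 * 1024))  # 1 (MB for the low-res texture)
print(full // small)           # 16 (times smaller download)
```

Real assets use block compression (BCn) on top of this, but the ratio between resolutions holds either way, which is why shipping low-res textures shrinks downloads so dramatically.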
It would be an amazing tool for indie developers, no doubt. We've already seen the wonders of upscaling low-resolution textures from older games with the AI-based ESRGAN (Enhanced Super-Resolution Generative Adversarial Networks) model, but doing that in real time would be even more impressive.
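At the core of super-resolution networks like ESRGAN is a sub-pixel convolution (pixel shuffle) step that rearranges learned feature channels into extra spatial resolution. The snippet below is a minimal NumPy illustration of that rearrangement only, not of the full network; the feature map is dummy data standing in for a model's output.

```python
import numpy as np


def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into a (C, H*r, W*r) image.

    This is the sub-pixel convolution step used by super-resolution
    networks to turn learned feature channels into extra pixels.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Split the channel axis into (C, r, r), then interleave the two
    # r-sized axes into the spatial dimensions.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)


# A dummy 4-channel "feature map" standing in for network output.
features = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(np.float32)
upscaled = pixel_shuffle(features, r=2)
print(upscaled.shape)                   # (1, 4, 4)
```

Each output pixel in a 2x2 block is drawn from a different input channel, so the network learns per-sub-pixel detail rather than interpolating, which is what separates this from naive bilinear upscaling.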
This also makes us wonder what else Microsoft could be doing when it comes to machine learning and gaming. Microsoft released the DirectML API in spring 2019, for instance, citing super-resolution as one of its possible uses in gaming. There's already an example on PC in NVIDIA's Deep Learning Super Sampling (DLSS) technology, which exploits the Tensor Cores available in Turing GPUs, while AMD has previously revealed it is experimenting with DirectML to achieve similar results. Since the Xbox Series X is powered by AMD hardware and Microsoft's own DirectX API, it is conceivable that the new console could support this kind of AI-based technology.