Sweeney: 40 TFLOPS Can Render Photo-Realistic Dynamic Scenes, But Humans Require More Than Computing Power

Jul 16, 2016

Each passing year brings incredible-looking new games that slowly but surely narrow the gap between real-time 3D graphics and photo-realism.

That’s mostly thanks to the increased computing power, which is often measured with TFLOPS (tera floating-point operations per second). How far are we from proper photo-realistic graphics, then, in terms of TFLOPS?


According to Epic founder Tim Sweeney, quite a bit. In an interview with GameSpot, he said that while static scenes without humans are not a problem for today’s hardware, it will take around 40 TFLOPS to render photo-realistic dynamic environments.

You know, we’re getting to the point now where we can render photo-realistic static scenes without humans with static lighting. Today’s hardware can do that, so part of that problem is solved. Getting to the point of photo-realistic dynamic environments, especially with very advanced shading models like wet scenes, or reflective scenes, or anisotropic paint, though…maybe forty Teraflops is the level where we can achieve all of that.

When you consider that the upcoming PlayStation 4 Neo seems to have 4 TFLOPS and the Xbox One Scorpio is targeting 6 TFLOPS, that’s still some years away (though the GTX 1080 already sports 9 TFLOPS).
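To put those numbers in perspective, here is a rough back-of-the-envelope comparison of the figures quoted above against Sweeney’s 40 TFLOPS estimate. The TFLOPS values are simply the approximate peak numbers cited in this article, not measured performance, and the comparison ignores architectural differences between the machines.

```python
# Rough comparison of quoted hardware TFLOPS against Sweeney's 40 TFLOPS
# estimate. Figures are the approximate peaks cited in the article.
TARGET_TFLOPS = 40.0

hardware = {
    "PlayStation 4 Neo": 4.0,
    "Xbox One Scorpio": 6.0,
    "GTX 1080": 9.0,
}

for name, tflops in hardware.items():
    shortfall = TARGET_TFLOPS / tflops
    print(f"{name}: {tflops} TFLOPS, roughly {shortfall:.1f}x short of 40 TFLOPS")
```

Even the fastest of the three would need to improve by a factor of four or so, which is why the article pegs photo-realistic dynamic scenes as still some years away.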

Humans are another story entirely, though, as anyone familiar with the uncanny valley hypothesis already knows. It’s not just a matter of computing power: humans need to behave and react in a believable way, and for that, Sweeney reckons we’ll need algorithms that are still decades away from being realized.

Not with humans. Humans are the harder part, but that’s just …We know exactly how real world physics of lighting work, and so that’s just a matter of brute force computing power. Give us enough computing power, and we can do that. We could do that today with algorithms that we know. Humans are a much harder problem, because rendering faces and skin is hard enough, but you quickly realize that the challenge with rendering humans is having realistic human motion to display. Having dynamic human responses in games are reactive to what you’re doing, and aren’t just pre-baked.

As you’re interacting with a real human, their eyes are constantly moving with you, the eye contact is super important. You’re picking up the emotions on their faces, and they’re dynamically responding to you. If you just used a perfect motion captured human, a flawless motion capture with future technology, it would still be uncanny and not feel like it’s a real human interacting with you.

To do completely photo-realistic rendering of everything, you have to simulate realistic humans and actually simulate human intelligence, emotion, and thinking. It’s not a matter of computing power. If you gave us an infinitely fast computer, we still don’t have the algorithm. We have no real clue how the brain works at the higher levels. You might understand how one neuron interacts with other adjoining neurons, but the large scale structure of it is still a complete mystery. That could be unpredictably far away. Once we are able to simulate human intelligence, what’s going to separate humans from people? You’re talking singularity level stuff at that point, but I do think that we’re many decades away from having that ability.

Hopefully by that time we’ll still be around to report on what would be a historic technological breakthrough.