During last year's EA Play event, EA announced a new division called SEED (Search for Extraordinary Experiences Division), whose primary goal is to advance technology in fields like AI, deep learning, machine learning, AR, VR and more.
Now we have a practical example of what they've been working on: a team led by technical director Magnus Nordin set out to build a self-learning AI agent capable of playing a game as complex as Battlefield 1, and succeeded.
How long did the self-learning agent train?
You can’t play Battlefield by pressing a single button at a time. Rather, it requires players to perform an array of simultaneous actions. So to help the self-learning agent get a head start with basic action combinations, we let it observe 30 minutes of human play, a process called imitation learning, before letting it train on its own.
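Nordin doesn't detail SEED's imitation-learning setup, but the basic idea of bootstrapping a policy from recorded human play is often implemented as behaviour cloning: treat the demonstrations as a supervised dataset of (observation, action) pairs and fit a policy to them before any reinforcement learning begins. A minimal sketch; the data, dimensions, and linear policy below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 8-dim observations, 4 discrete action combos.
N_OBS, N_ACT, N_DEMOS = 8, 4, 500

# Fake "human demonstration" data: the demonstrator presses the combo
# matching the largest of the first four observation components.
obs = rng.normal(size=(N_DEMOS, N_OBS))
actions = obs[:, :N_ACT].argmax(axis=1)

W = np.zeros((N_OBS, N_ACT))  # weights of a linear softmax policy

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Behaviour cloning: supervised cross-entropy fit to the demo actions.
for _ in range(500):
    probs = softmax(obs @ W)
    probs[np.arange(N_DEMOS), actions] -= 1.0   # dLoss/dLogits
    W -= 0.5 * (obs.T @ probs) / N_DEMOS

pred = softmax(obs @ W).argmax(axis=1)
accuracy = (pred == actions).mean()
print(f"imitation accuracy on the demos: {accuracy:.2f}")
```

After this supervised warm start, the cloned policy would serve as the starting point for self-directed training rather than a random one.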
The agents that we show in our demo have subsequently practiced for six days against versions of themselves and some simple old-fashioned bots, playing on several machines in parallel. In total, that equates to roughly 300 days of gameplay experience. They’re constantly improving, but they aren't particularly fast learners.
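Nordin doesn't say how many instances ran in parallel, but the two figures he quotes imply it. A trivial back-of-the-envelope check:

```python
# Back-of-the-envelope check on the experience figures quoted above.
wall_clock_days = 6            # real time the agents trained
total_experience_days = 300    # aggregate gameplay experience quoted

# Implied degree of parallelism (machines x concurrent game instances):
parallel_instances = total_experience_days / wall_clock_days
print(parallel_instances)  # → 50.0
```

So "several machines in parallel" works out to on the order of 50 concurrent game instances.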
The agent has the same field-of-view as a human player and is assisted by a mini-map. We quickly discovered, however, that Battlefield is too visually complex for the agent to understand, which meant we had to simplify what it sees.
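SEED hasn't published its exact preprocessing, but "simplifying what the agent sees" typically means reducing the rendered frame to a coarse, low-resolution observation before feeding it to the network. A minimal sketch using average pooling; the frame resolution and pooling factor here are assumptions, not SEED's actual values:

```python
import numpy as np

def downsample(frame, factor):
    """Average-pool an (H, W) grayscale frame by an integer factor,
    a common way to shrink the visual input an RL agent receives."""
    h, w = frame.shape
    h2, w2 = h - h % factor, w - w % factor      # crop to a multiple
    blocks = frame[:h2, :w2].reshape(h2 // factor, factor,
                                     w2 // factor, factor)
    return blocks.mean(axis=(1, 3))              # one value per block

# A hypothetical 720p grayscale frame reduced to a coarse grid.
frame = np.random.default_rng(1).random((720, 1280))
small = downsample(frame, 8)
print(small.shape)  # → (90, 160)
```

The agent then learns from the coarse grid, which strips out most of Battlefield's visual complexity while preserving the rough layout of the scene.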
We’ve seen cases of self-learning agents that have taught themselves to play old arcade games, the original Doom, and even Go. What makes your work stand out from these examples?
As far as I know, this is the first implementation of deep reinforcement learning in an immersive and complex first-person AAA game. Besides, it’s running in Battlefield, a game with famously elaborate game mechanics.
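For readers unfamiliar with the term: in reinforcement learning the agent improves purely through trial and error, guided by reward signals, and "deep" RL replaces the lookup table below with a neural network. A toy tabular Q-learning example on an invented five-state corridor (none of this is SEED's code; the environment and parameters are made up for illustration):

```python
import numpy as np

# Five states in a row; the agent starts at state 0 and is rewarded
# only for reaching state 4. Classic tabular Q-learning.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left / step right
Q = np.zeros((N_STATES, len(ACTIONS)))   # value of each (state, action)
alpha, gamma = 0.5, 0.9                  # learning rate, discount
rng = np.random.default_rng(0)

for _ in range(300):                     # training episodes
    s = 0
    while s != GOAL:
        a = rng.integers(2)              # explore at random (off-policy)
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Bellman update: nudge Q toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print(policy)  # the learned policy walks right (=1) from states 0..3
```

Scaling this trial-and-error principle from a five-state corridor to Battlefield's huge visual state space is what makes the deep-learning part, and SEED's result, hard.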
The result isn't perfect yet, as Nordin himself admits. The AI agents aren't capable of thinking ahead, which means they don't quite know what to do when an objective isn't in sight, but Nordin reckons that will improve with time.
This doesn't mean bots are coming to Battlefield 1. SEED's objectives are different: for starters, the project aims to help with Quality Assurance, for instance by generating more crash reports.
Eventually, as the technology is refined, the expectation is that these self-learning AI agents could become intelligent NPCs capable of engaging directly with human players in a number of ways. You can find out more about the technical side of this project in this blog post.
At GDC 2018, SEED also showcased a real-time raytracing experiment called Project PICA PICA, based on Microsoft's new DXR API. Here, raytracing was used for ambient occlusion, transparency, translucency, soft shadows, subsurface scattering, dynamic global illumination, and reflections.
You can check out the demo in action below.