What research means at Embark Studios

Automating game content: From Pavlov’s dogs to personalities and predictive context.

Teaching Burgerman how to walk on his own is part of the research happening here at Embark.

At Embark, we’re researching ways to let game developers tap into their true creativity: from training machines with rewards, to making game characters come alive (don’t worry), to giving creators predictive context tools. In the context of our budding creative platform, this is the sort of technology that will turn anyone into a game maker.

I’m Magnus, one of the founders and Chief Exploration Officer here at Embark. At our studio, we apply research to game industry problems on our quest to modernize game development and discover new experiences.

Before getting into the weeds here, let’s start with an overview of what applying research means in the context of Embark. In short, my team and I are responsible for tracking the latest research within areas that are relevant to us (like reinforcement learning, natural language understanding, computer vision, and generative models) and for filtering and prioritizing the work that contains methods and solutions applicable to our problems. Every week we hold reading sessions, where we review and analyze interesting research in more detail.

If we agree that a method in a paper looks relevant — and more importantly, that it seems feasible to implement in a system we can put into production — we proceed with a pre-study for a few weeks. If the results of the pre-study are positive, we assign a squad of researchers and software engineers to build a prototype. If the prototype is satisfactory, we then build a robust and efficient version of the system for deployment in production.

It’s not uncommon that we improve or extend methods described in published research, or adapt them to our specific needs. If the originality of our own research is high enough, our aim is to publish these results in the open, as a contribution to the research community on which we depend for our work.

If you’ve followed what we do here at Embark, you may be familiar with some of the areas we’re exploring. We’ve previously discussed our work on physics-based animation with machine learning, specifically reinforcement learning. In a nutshell, imagine something like Pavlov’s dogs: we train machines to walk by giving them rewards, or virtual dog treats, for doing the right things, instead of having to script behaviors and animate movement manually.

That’s not all, however. We’re continuing with this work, while also taking two more steps into uncharted territory:

  1. breathing “life” into these characters by giving them agency and personality; and,
  2. building tools that understand the context of the game environment and assist creators as they create.

Most of this work is aimed at the creative platform we’re working on — a project focused fully on user-created content, and enabling players to build their own worlds and interactive experiences. Naturally, any lessons learned are applied across the studio.

In this house, we obey the laws of physics

While we push the boundaries of game development, we want to stay grounded in reality as we create our worlds. That means we rely on physics as much as possible, which comes with several benefits: a good physics simulation reduces the burden of programming each and every interaction and event in a game. Rather than scripting interactions manually, you simply let the physics engine take care of them automatically.

This means developers (and the users of our tools) can spend more time on the creativity of making games. Beyond that, sticking to physics allows characters and objects to move and interact with the world more naturally.

However, animation in a physics-based world isn’t easy; in fact, it’s really hard. Classic animation methods work poorly in a physics simulation. Our work with reinforcement learning to engineer animation, on the other hand, works well in a physics simulation.

Using this method, you teach a neural network how to control the character and make it move — in essence, automating the animation of the character or creature. We specify bones, joints, mass, muscles, or motors, and we leave it to the training algorithm to come up with a “brain” that enables the creature to walk in different terrains, or simply to stand still — don’t underestimate the difficulty of getting creatures to balance themselves in a physics simulation.
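
To make the “virtual dog treat” idea a bit more concrete, here is a minimal sketch in Python. The environment, policy, and field names are placeholders we made up for illustration, not our production system or any specific library; the point is only that the reward signal, rather than hand-authored animation, is what shapes the behavior.

```python
# Minimal sketch (not Embark's production system): reward shaping for a
# physics-based walker trained with reinforcement learning. The "dog treat"
# is the reward: forward progress and staying upright are good, wasted
# effort is bad, and the training algorithm has to discover the gait itself.

import numpy as np


def locomotion_reward(forward_velocity: float,
                      torso_height: float,
                      joint_torques: np.ndarray,
                      target_height: float = 1.0) -> float:
    """Hypothetical reward for a single physics step."""
    progress = forward_velocity                          # reward moving forward
    upright = -abs(torso_height - target_height)         # reward staying balanced
    effort = -1e-3 * float(np.sum(joint_torques ** 2))   # penalize wasted energy
    return progress + 0.5 * upright + effort


def train(env, policy, episodes: int = 1000):
    """Skeleton training loop; `env` and `policy` are placeholders, not real
    APIs. Any standard RL algorithm (e.g. PPO) could sit behind `policy`."""
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = policy.act(state)                   # joint torques to apply
            state, done, info = env.step(action)         # advance the physics sim
            reward = locomotion_reward(info["vx"], info["torso_z"], action)
            policy.observe(state, action, reward)        # store experience
        policy.update()                                  # learn from the episode
```

A lot of the craft lies in tuning the weights on progress, balance, and effort; the balance term alone hints at why simply standing still is already a non-trivial behavior to learn.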

We’ve continued experimenting with this method, using it on a variety of anatomies.

In some cases, surprisingly natural gaits emerge; in other cases, the output is less natural. An overly “cartoony” character would probably have a hard time walking in reality, so it struggles to learn to walk in a physics simulation too. Right now, we are working on training for tasks other than locomotion, on speeding up the training of these “brains”, and more.

Making characters come alive

Moving from the physical to the personal realm in our worlds, we’re also experimenting with how to make characters come alive without players having to script actions and reactions. We’re using language models (big neural networks specialized in language understanding) as the core of characters’ conversation and personality. These models help characters understand their environments and the events that happen around them.

To achieve success here, we need to address the notion of grounded language understanding. This means characters should, as much as possible, be aware of their surroundings and be able to reason about nearby objects, events, and other characters. Grounded language understanding is an active and growing research topic, and we’ll have more to say on this topic in a future post.
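
As a rough illustration of what grounding can mean in practice, here is a hypothetical sketch: the character’s surroundings and recent events are turned into text that a language model can condition on before replying. WorldSnapshot, build_grounded_prompt, and the persona are invented names for this example, not an actual API of ours.

```python
# Rough illustration with invented names (WorldSnapshot, build_grounded_prompt):
# grounding a character's language model by turning nearby objects and recent
# events into text the model can condition on before replying to the player.

from dataclasses import dataclass


@dataclass
class WorldSnapshot:
    nearby_objects: list[str]
    recent_events: list[str]
    persona: str = "a curious seal who loves fish"


def build_grounded_prompt(snapshot: WorldSnapshot, player_line: str) -> str:
    return (
        f"You are {snapshot.persona}.\n"
        f"Nearby you can see: {', '.join(snapshot.nearby_objects)}.\n"
        f"Recently: {'; '.join(snapshot.recent_events)}.\n"
        f'The player says: "{player_line}"\n'
        "Reply in character, referring only to things you can actually see."
    )


# The resulting prompt would then be handed to whichever language model the
# game uses, e.g. response = language_model.generate(prompt) for some wrapper.
snapshot = WorldSnapshot(
    nearby_objects=["a red bucket", "a wooden pier", "a fishing rod"],
    recent_events=["the bucket fell over", "a seagull flew past"],
)
print(build_grounded_prompt(snapshot, "What just happened here?"))
```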

Our seal shows an early example of understanding something about his world.

It’s all about context

In our quest to remove barriers to creativity in games, we have also started to research “intelligent tools” — that is, tools that place the context of a game’s world at your fingertips, as you make creative decisions (e.g. about what to build).

We envision a tool that “knows” all of the game’s assets in a content library. If a creator wants to replace an asset in a world they have built, they can use this tool to see which assets could serve as a replacement. Perhaps more importantly, the creator can quickly see which assets are similar to the one they want to replace. Here, there are two types of similarity that the tool can draw from, either separately or at the same time (a rough sketch of how this could work follows below):

  1. semantic similarity, e.g. a bottle is similar to a bucket as they’re both containers; and
  2. visual similarity, e.g. a dead tree can be similar to a lamp post in appearance, even if their purposes and functions, i.e. their semantics, are completely different.

Using one of our intelligent tools to do some refurbishing.
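
Below is a simplified sketch of how such a similarity lookup could be put together, assuming every asset already has a semantic embedding (from its name or description) and a visual embedding (from its rendered appearance). The function names and the blend parameter are ours for illustration only; a blend of 1.0 would rank purely by meaning, 0.0 purely by looks.

```python
# Simplified sketch of an asset-similarity lookup, assuming each asset already
# has a semantic embedding (from its name/description) and a visual embedding
# (from its appearance). All names and the toy embeddings are illustrative only.

import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def rank_replacements(query_id, asset_ids, semantic, visual, blend=0.5):
    """semantic/visual map asset id -> embedding vector.
    blend=1.0 ranks purely by semantics, blend=0.0 purely by appearance."""
    scored = []
    for asset_id in asset_ids:
        if asset_id == query_id:
            continue
        s = cosine(semantic[query_id], semantic[asset_id])  # same kind of thing?
        v = cosine(visual[query_id], visual[asset_id])      # looks the same?
        scored.append((blend * s + (1 - blend) * v, asset_id))
    return [asset_id for _, asset_id in sorted(scored, reverse=True)]


# Toy usage with made-up 4-dimensional embeddings:
rng = np.random.default_rng(0)
ids = ["bottle", "bucket", "dead_tree", "lamp_post"]
semantic = {i: rng.normal(size=4) for i in ids}
visual = {i: rng.normal(size=4) for i in ids}
print(rank_replacements("bottle", ids, semantic, visual, blend=0.7))
```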

Sharing is caring

We will revisit these projects in future blog posts. We hope to open-source some of the systems we have created while doing this research, to spread knowledge and hopefully engage a community of peers in extending and improving the capabilities of these systems.

We’re convinced we have only scratched the surface, and that there are many more applications of machine learning that will help create new game experiences for future players. If you agree and believe there is an exciting future for the combination of machine learning and game development, don’t hesitate to look at our job openings. There might be a role for you in our team!

Magnus Nordin

Chief Exploration Officer of Embark Studios