This AI can do break-dancing, karate and more!

Overview of the paper “A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters” by J. Won et al.

Chintan Trivedi
deepgamingai

--

One of my favorite genres of games is the kind that tries to accurately replicate the real world inside the game simulation. If you haven’t played the latest Spider-Man game on PS4 yet, you need to take a look at it, because it does a marvelous job of recreating the island of Manhattan in New York City, and it is one of the most accurate recreations I have seen to date. Not just the buildings and the roads, but even the crowd simulation gives you the true feeling of NYC.

Left: Comparisons between the Spider-Man PS4 game and real pictures of NYC. Right: Crowd simulation in the game, including diverse behaviors such as street performers, bystanders, and dancers.

Anyway, that is slightly beside the point of the paper we are going to look at today. I want to cover this recent work from Facebook AI Research, which provides a scalable method for using neural-network-based motion models to simulate crowds in games such as Spider-Man. The paper is titled “A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters” by J. Won et al.

This is important because if you want to create a massive open-world game set in a place like NYC, populated by thousands of people including street artists, dancers, entertainers, etc., you would otherwise need a separate motion model for each kind of character, all running in real time. Training and running a separate motion model to animate every individual character in the game in real time is highly cumbersome.

Diverse motions generated by the same model for large crowd simulation. [source]

So, this paper provides an elegant solution: combine multiple motion models into a single general controller that can reproduce any motion you desire. As you can see above, the same model is able to produce diverse behaviors across different clusters of motions. Here you can see a cluster of break-dancing motions, another cluster of people dancing, people doing karate, and so on.

Each cluster is a specific category of motion handled by its own expert neural network. Starting from a motion capture database containing many motion categories, the authors first cluster the clips according to the type of motion. An individual expert controller is then trained on each cluster, and the experts are finally combined into a single controller model.
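To make the idea a bit more concrete, here is a minimal sketch of how several pre-trained expert controllers could be blended by a learned gating network into one model. This is only an illustration in PyTorch under my own assumptions (the class names, layer sizes, and exact blending scheme are mine, not the authors’ implementation):

```python
import torch
import torch.nn as nn

class ExpertPolicy(nn.Module):
    """One expert controller, trained on a single motion cluster
    (e.g. break-dancing). Maps the character's observation to
    joint-actuation targets. Illustrative architecture only."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class MixtureController(nn.Module):
    """A single controller that blends the experts with a learned
    gating network, so one model can reproduce every motion category.
    A rough sketch of the idea, not the paper's exact architecture."""
    def __init__(self, experts, obs_dim):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        self.gate = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, len(experts)),
        )

    def forward(self, obs):
        # Gating weights over the K experts, one set per batch element.
        weights = torch.softmax(self.gate(obs), dim=-1)            # (B, K)
        # Each expert proposes an action; blend them with the weights.
        actions = torch.stack([e(obs) for e in self.experts], dim=1)  # (B, K, A)
        return (weights.unsqueeze(-1) * actions).sum(dim=1)           # (B, A)
```

In a setup like this, the experts would be trained first on their individual motion clusters, and the combined controller would then learn how to weight them for whatever reference motion the character is asked to follow.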

Slight variations in repeatedly generated motions add realism. [source]

As shown above, the individual motions also come out slightly different each time you generate them, which makes the animations feel less repetitive and more realistic. I hope you can now imagine how much easier it would be to replace current animation techniques with neural-network-based motion models in large games. I encourage you to check out more amazing results in the authors’ video.
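As a rough intuition for where such variation can come from: if the controller is stochastic, then simply sampling actions around the mean produces a slightly different trajectory on every rollout. The snippet below is a generic illustration of that idea, not the paper’s specific mechanism, and the noise scale is an arbitrary value I chose:

```python
import torch

def sample_action(mean_action: torch.Tensor, log_std: float = -2.0) -> torch.Tensor:
    """Add small Gaussian noise around the controller's mean action.

    Purely illustrative: each call returns a slightly different action,
    so repeated rollouts of the same motion never look identical.
    The noise scale (log_std) is an arbitrary choice, not from the paper.
    """
    std = torch.exp(torch.tensor(log_std))
    return mean_action + std * torch.randn_like(mean_action)
```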

Thank you for reading. If you liked this article, you can follow more of my work on Medium and GitHub, or subscribe to my YouTube channel.
