After Mastering Go and StarCraft, DeepMind Takes on Soccer

Synced · Published in SyncedReview · Feb 26, 2019

Having notched impressive victories over human professionals in Go, Atari games, and most recently StarCraft II, Google’s DeepMind team has now turned its formidable research efforts to soccer. In a paper released last week, the UK-based AI company demonstrates a novel machine learning method that trains a team of AI agents to play a simulated version of “the beautiful game.”

Gaming, AI and soccer fans hailed DeepMind’s latest innovation on social media, with comments like “You should partner with EA Sports for a FIFA environment!”

Why Robot Soccer?

Machine learning, and particularly deep reinforcement learning, has in recent years achieved remarkable success across a wide range of competitive games. Collaborative multi-agent games, however, have remained a relatively difficult research domain. A significant milestone was reached last year, when the “OpenAI Five” bot team held its own against human pros in the highly complex and wildly popular multiplayer video game Dota 2.

Robot soccer, meanwhile, is a typical collaborative multi-agent game challenge, and researchers and engineers test their AI-programmed physical robots each year at RoboCup. One of the event’s sub-tracks is the RoboCup Federation’s 3D Soccer Simulation League, in which software-controlled robots compete in simulated soccer games.

Although machine learning has been widely leveraged in robot soccer simulations, deep reinforcement learning has not. That’s why deep reinforcement learning pioneers DeepMind were determined to give it a shot.

How AI Learns to Cooperate

To tackle the multi-agent robot soccer problem, DeepMind researchers combined Stochastic Value Gradients (SVG0), a reinforcement learning algorithm for continuous control, with population-based training (PBT), a method that optimizes hyperparameters across a population of simultaneously learning agents.
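To make the training setup concrete, here is a minimal Python sketch of PBT’s periodic exploit-and-explore step. The Agent fields used (fitness, weights, hyperparams), the 20 percent cutoff, and the perturbation factors are hypothetical illustrations, not details taken from DeepMind’s paper:

import copy
import random

# Minimal PBT sketch: agents train in parallel, and at intervals weaker
# agents copy ("exploit") the weights and hyperparameters of stronger
# ones, then perturb ("explore") the copies.
def pbt_exploit_and_explore(population):
    ranked = sorted(population, key=lambda agent: agent.fitness)
    cutoff = max(1, len(ranked) // 5)  # bottom 20% copy from the top 20%
    for weak in ranked[:cutoff]:
        strong = random.choice(ranked[-cutoff:])
        weak.weights = copy.deepcopy(strong.weights)  # exploit: inherit weights
        weak.hyperparams = {                          # explore: perturb values
            name: value * random.choice([0.8, 1.2])
            for name, value in strong.hyperparams.items()
        }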

Researchers first simulated a 2v2 soccer game using the MuJoCo physics engine. The game featured four players with simplified humanoid forms operating together in a 3-dimensional action space.

Ten different simulated robot soccer teams were generated, each trained with 25 billion frames of learning experience. Researchers then simulated one million tournament matches among the ten squads.

Given sufficient learning experience, DeepMind’s soccer AIs gradually developed a set of cooperative behaviors. Researchers observed, for example, that an agent trained for five billion steps would only dribble the ball itself, ignoring its teammate’s position; but by 80 billion training steps the agent had become less selfish and began performing humanlike one-two passes to improve its team’s chances of scoring.

Get Your Hands on the Game

DeepMind also released the MuJoCo Soccer environment as an open-source research platform for simulated, competitive-cooperative multi-agent interactions.

To play simulated soccer games, first install MuJoCo Pro 2.00 and the dm_control Python package, then import soccer as dm_soccer. Settings such as team size and time limit can be adjusted. To visualize an example 2-vs-2 soccer environment in the dm_control interactive viewer, execute dm_control/locomotion/soccer/explore.py.
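The snippet below follows the usage pattern documented in the dm_control soccer module; treat it as a sketch, since exact keyword arguments may vary across dm_control versions. It loads a 2-vs-2 environment and steps through one episode with random actions:

import numpy as np
from dm_control.locomotion import soccer as dm_soccer

# Load a 2-vs-2 soccer environment; team_size and time_limit (in seconds)
# are the adjustable settings mentioned above.
env = dm_soccer.load(team_size=2, time_limit=10.0)

# The environment exposes one action spec per player (four in 2-vs-2);
# each action is a small continuous control vector.
action_specs = env.action_spec()

# Step through one episode, feeding every player a uniformly random action.
timestep = env.reset()
while not timestep.last():
    actions = [np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
               for spec in action_specs]
    timestep = env.step(actions)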

The paper Emergent Coordination Through Competition has been accepted as an ICLR 2019 conference paper and is available on arXiv. The project code is hosted in DeepMind’s dm_control GitHub repository.

Journalist: Tony Peng | Editor: Michael Sarazen

