Artificial Intelligence is learning to walk

Łukasz Kidziński
3 min read · Mar 31, 2017

--

Movement starts in the brain. Most of us learned to coordinate the signals sent from the brain to our muscles when we were between 9 and 18 months old. Ever since, we have taken walking for granted.

Your first trials were not much better.

At Stanford we try to understand human motor control by building computational models. Given measurements of human movement collected with motion-capture systems, we estimate which muscles were activated at each point in time. However, we've just started experimenting with the inverse approach: what if we tried to learn muscle activations from scratch? Can an artificial neural network learn to control a human body?

Clearly, we can't have it control a real human body. Instead, we used the musculoskeletal simulator OpenSim to create an artificial environment in which we want to synthesize walking with an artificial intelligence algorithm. To take advantage of AI experts from all over the world, we created a challenge called “Learning how to walk” on the crowdAI website.

A confused walker (Solution submitted by VictorM)

The challenge

The task is to control the model so that it moves as far as possible in 5 seconds. Participants control 18 muscles of a musculoskeletal model by sending activations, i.e. variables in the [0, 1] interval, where 0 means no activation and 1 means full activation. The algorithm deciding which muscles to activate can use observational data including the positions, velocities and accelerations of the feet, pelvis, joints and center of mass. Decisions need to be made at a frequency of 100 Hz.
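To make the interface concrete, here is a minimal sketch of what a controller in this setting must produce. The `controller` function and the placeholder observation are hypothetical illustrations, not part of the actual challenge API; the real constraint from the challenge is the output shape: 18 activations, each in [0, 1].

```python
import random

NUM_MUSCLES = 18  # the musculoskeletal model exposes 18 muscles

def controller(observation):
    """Map an observation vector to 18 muscle activations in [0, 1].

    This toy policy is a hypothetical example: it draws a random
    intensity for each muscle and clips it to the valid [0, 1] range,
    which any real policy must also do before stepping the simulator.
    """
    activations = [random.gauss(0.5, 0.3) for _ in range(NUM_MUSCLES)]
    return [min(1.0, max(0.0, a)) for a in activations]

# At 100 Hz, a 5-second episode means 500 control decisions.
observation = [0.0] * 10  # placeholder; the real layout comes from the environment
for _ in range(500):
    action = controller(observation)
```

A learned policy would replace the random draw with the output of a neural network, but the clipping to [0, 1] stays the same.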

A very stable walker (Solution submitted by yieldthought)

Results

So far, over 130 people have tried the problem and we have received over 50 submissions. Some of the results are presented in the videos above and below. It's clearly not a mature adult walk yet, but it's definitely encouraging. (By the way, the music was suggested by the YouTube recommender, and I love how it fits the action. :)

Can I try?

Definitely! It's actually surprisingly simple to get a basic model running. You just need to install Anaconda and then our environment:

conda create -n opensim-rl -c kidzik opensim git
source activate opensim-rl
pip install git+https://github.com/kidzik/osim-rl.git

And you are ready to go. You can test the environment by running the following code in Python:

from osim.env import GaitEnv

# Create the simulation environment and open the visualizer window
env = GaitEnv(visualize=True)
observation = env.reset()
# Apply 500 random muscle activations, one control step at a time
for i in range(500):
    env.step(env.action_space.sample())

To train a basic artificial neural network, you can just install keras-rl and execute this script. Using the so-called Deep Deterministic Policy Gradient (DDPG) algorithm, it will train a neural network to control a human body! You can find details regarding installation here.
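DDPG is an off-policy actor-critic method for continuous actions. To explore, it perturbs the actor's deterministic output with temporally correlated noise, typically an Ornstein–Uhlenbeck process. Here is a minimal, self-contained sketch of that noise process; the parameter values are illustrative defaults, not the ones used in the challenge script.

```python
import random

class OrnsteinUhlenbeckNoise:
    """Temporally correlated exploration noise, as used in DDPG.

    Each step nudges the state back toward the mean `mu` (mean
    reversion) while adding Gaussian jitter, producing smooth noise
    that suits continuous control better than independent samples.
    """

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=0.01):
        self.size, self.mu = size, mu
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.state = [mu] * size

    def sample(self):
        # x += theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        self.state = [
            x + self.theta * (self.mu - x) * self.dt
              + self.sigma * (self.dt ** 0.5) * random.gauss(0.0, 1.0)
            for x in self.state
        ]
        return self.state

# One noise channel per muscle: during training, add noise.sample()
# to the actor's 18 activations, then clip back to [0, 1].
noise = OrnsteinUhlenbeckNoise(size=18)
perturbation = noise.sample()
```

During training the noise makes the walker try smoothly varying muscle patterns; at evaluation time the noise is dropped and the actor's raw output is used.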

Jumper (Solution submitted by spMohanty)

Why would I care?

The set of techniques used in our challenge is called reinforcement learning. These methods are often used in games (AlphaGo, Atari) and robotics. Why not apply them in health care?

Imagine an assistive device that sends signals to help patients who drag their foot (common after a stroke or in multiple sclerosis). This actually already exists. Now imagine an assistive device for the entire gait. To make that possible, we need to understand which muscles to stimulate. By solving our challenge and tackling similar problems, we are getting closer to this goal.

Wait a minute… AI controlling human body? Are we doomed?

No, not really.
