How exoskeletons can sense your motor commands (Part 2 of 3 on Exos)

Keenon Werling
Aug 24, 2022 · 7 min read


This is Part 2 of a multipart series on exoskeleton research. If you haven’t read Part 1, I’d recommend going back and reading that first (it’s short, I promise!).

In Part 1, I introduced what I think are the two big problems in exoskeleton research:

  • “How does an exoskeleton know what you want to do?”
  • If it knows what you want to do, how can the exoskeleton help, without making a “3-legged race”?

This post is going to talk about the first question. For the second, see Post 3.

[ If you’re a computer science researcher, stay tuned until the end for a cool new competition and workshop we’re setting up! You can use your skills to help do a lot of good, and help restore mobility for disabled people. ]

How exoskeletons sense your motor commands:

Your intentions to move, or “motor commands,” cascade through your body, and there are lots of places where we can measure them. Commands start out in the brain, lighting up the motor cortex. Those signals are then encoded as nerve pulses and sent out to motor neurons. The motor neurons trigger the muscles to physically contract, which produces physical force on the skeleton, and that force causes your body to accelerate. That acceleration, integrated over time, produces your desired motion.

Different approaches to measuring your motor commands tap into this cascade at different points. I summarize all the different ways people have already tried to sense motor commands in the Appendix at the bottom of the post. Approaches that sense motor commands in the brain, in the motor neurons, or at the motion level (i.e., predicting current motor commands from past motion — “gait cycle assistance”) all have pretty serious challenges that I think will take many years to solve. Though it won’t be easy, I’m convinced that sensing motor commands by measuring (in real time) the forces your muscles are generating at your joints provides the best opportunity for a near-term win.

How to improve motor command sensing: IMUs and Inverse Dynamics

Before I explain how, I need to briefly introduce the concept of “inverse dynamics”. The idea is simple, but profound: to measure forces, all you actually need to do is measure accelerations (and know the masses involved). Because of Newton’s 2nd law (force = mass * acceleration), if you know the mass and measure the acceleration, you can compute the force. Devices that measure acceleration are ~100x cheaper and significantly more durable than comparable devices that measure force. If you’re curious about how to run inverse dynamics on complex skeletons with multiple body segments connected by joints, you can read about the math in the docs of my physics engine, Nimble, or in a robotics textbook, or you can just trust me that this is possible 😛. Measuring the mass of each segment in a living human body is also not trivial, but it can be done (I’ll have to write a follow-up post about this).
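
To make the idea concrete, here is a minimal sketch of inverse dynamics for a single rigid segment swinging about a fixed joint, a toy stand-in for the full multi-segment problem. The pendulum model and all the numbers below are purely illustrative:

```python
import numpy as np

def inverse_dynamics_single_segment(theta, theta_ddot, mass, length):
    """Joint torque for a uniform rod pivoting at one end (a toy "limb segment").

    theta:      joint angle, measured from hanging straight down [rad]
    theta_ddot: measured angular acceleration [rad/s^2] (this is what IMUs give us)
    mass:       segment mass [kg] (assumed known)
    length:     segment length [m]
    """
    inertia = mass * length**2 / 3.0                               # moment of inertia about the pivot
    gravity_torque = mass * 9.81 * (length / 2.0) * np.sin(theta)  # torque from the segment's own weight
    # Newton's 2nd law for rotation: inertia * theta_ddot = tau - gravity_torque
    return inertia * theta_ddot + gravity_torque

# Example: a made-up 3.5 kg, 40 cm "shank" accelerating at 20 rad/s^2, held 30 degrees from vertical
tau = inverse_dynamics_single_segment(np.deg2rad(30), 20.0, 3.5, 0.4)
print(f"Estimated joint torque: {tau:.2f} N*m")   # ~7.2 N*m
```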

People wearing Xsens commercial IMU motion capture suits, striking dramatic poses

Measuring accelerations on the human body with portable inertial measurement units (or “IMUs”) is inexpensive, and open-source implementations already exist (which require ~$100 worth of parts to build). There are also commercial options, if you don’t want to do any soldering.

So, we know we can strap ~$100 worth of sensors to the body and measure accelerations. A cheap and common IMU chip like ST’s LSM6 series has a 6 kHz sample rate (and others go even higher), so with careful programming of our central processor we could get up to 6,000 readings of body accelerations per second. In theory, we could then use some fancy math to reconstruct all the joint torques in the body 6,000 times per second, too. That would be a breakthrough in pilot motor command sensing!

You’re probably feeling suspicious. “It can’t be that easy, or else someone would have done it already and everyone would already be wearing exoskeletons.” You’re right: it turns out there is a tiny but very significant mathematical gremlin waiting to sabotage this beautiful idea. That gremlin goes by the name of “ground reaction force,” or “GRF,” and has been my nemesis for the past year. Let me introduce you.

The missing key — Ground Reaction Forces (“GRFs”):

3D visualization of sprinting data with ground reaction forces

It turns out that inverse dynamics has a catch. In order to compute the joint torques (which are the pilot’s motor commands), we need to know the forces between the feet and the ground (“GRFs”). Accurately measuring those forces is currently only possible with big, expensive, lab-based force plate setups. A motion capture recording done in a lab (equipped with such a force plate setup) is shown above, where we render the GRF as a red line extending upward from the center of pressure on the foot, in the direction of the force.
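
For readers who like equations: in standard multibody-dynamics notation (this is textbook robotics, nothing specific to my work), whole-body inverse dynamics looks roughly like the following, and the GRFs show up as unknown external forces on the right-hand side:

```latex
M(q)\,\ddot{q} + c(q, \dot{q}) + g(q)
  = \begin{bmatrix} 0_{6 \times 1} \\ \tau \end{bmatrix}
  + \sum_i J_i(q)^\top f_i^{\mathrm{GRF}}
```

Here q is the whole-body pose (including the six unactuated “floating base” degrees of freedom of the pelvis), M is the mass matrix, c and g collect velocity and gravity terms, τ is the vector of joint torques we actually want, and the f_i are the contact forces under the feet. Everything on the left can in principle be estimated from IMUs plus a mass model; without the GRFs on the right, though, the equations are underdetermined and we can’t solve for τ.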

An example of a force-plate instrumented treadmill — not exactly portable!

The force plates used to record GRFs are too large, heavy, and fragile to bring along on an exoskeleton, and attempts to make GRF sensors portable have thus far produced inaccurate and flimsy devices that won’t work for our purposes. It turns out that building a sensitive force sensor that can survive being stepped on hundreds of thousands of times without degrading is a very challenging problem.

Rather than give up, could we just simulate the feet and infer the GRF data from our simulations? After all, engineers use realistic physics simulations all the time to model how objects will interact in the real world. Human feet are so central to life and medicine (and even video games) that it seems like we should have good computer models of them, right?

Surprisingly, modern science still doesn’t have models of feet that can accurately predict GRF values in novel situations. Even more surprisingly, there isn’t even a big public dataset of motion and foot-ground contact that we could validate models against, so many papers in this direction have to collect their own small datasets (or simply don’t validate their models at all). But if we had accurate GRF models (validated against real data), we could build amazing intention-sensing devices and revolutionize exoskeleton control. So let’s see if we can make a dent in that problem:

GRF Data and Models to the Rescue!

I’m actively collecting a big dataset of motion and ground reaction forces. I’ve put up a tool, www.addbiomechanics.org, that helps biomechanics people automatically process their motion capture data (see the paper for more details — I’ll write a post about it soon), in exchange for sharing it with the world. There are already more than 100 users from motion labs all over the world: Harvard, MIT, UMass, Stanford, Edinburgh, Michigan State, etc., etc. Collectively, they’ve uploaded more than 5 gigabytes of data.

I’m also setting up a public competition (along with workshops at several conferences in machine learning and computer graphics) to see who can produce the most accurate GRF estimates from motion. I have no idea what kind of model will end up winning: neural networks, finite-element models, classic Hunt-Crossley models, or something totally new and different. Designing a model that produces accurate GRF values is a problem of much wider interest in research than just exoskeleton controllers (with applications in computer graphics, simulation, and biomechanics — I’ll have to do another post about how surprisingly central GRFs are), so solving GRF prediction will have impacts way beyond exoskeleton control.
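
Just to give a flavor of what one family of entries might look like (a hypothetical sketch, not a baseline from the actual competition), here is a tiny neural network that regresses GRFs from a short window of joint kinematics. The input and output sizes are made-up placeholders:

```python
import torch
import torch.nn as nn

# Hypothetical setup: 23 joint angles + 23 joint velocities over a 10-frame window in,
# a 6-dim output out (3D GRF + 3D center of pressure for one foot).
WINDOW, N_DOFS, OUT_DIM = 10, 23, 6

class GRFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # (batch, WINDOW, 2*N_DOFS) -> (batch, WINDOW*2*N_DOFS)
            nn.Linear(WINDOW * 2 * N_DOFS, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, OUT_DIM),
        )

    def forward(self, kinematics_window):
        return self.net(kinematics_window)

# Training would regress against force-plate measurements; here we just check the shapes.
model = GRFNet()
dummy_batch = torch.randn(32, WINDOW, 2 * N_DOFS)   # stand-in for real mocap data
predicted_grf = model(dummy_batch)                   # shape: (32, 6)
loss = nn.functional.mse_loss(predicted_grf, torch.zeros_like(predicted_grf))
loss.backward()
```

Whether something like this can beat a carefully tuned contact model is exactly the kind of question the competition is meant to answer.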

Achieving the Exoskeleton Dream:

Once we’ve (collectively) solved the GRF problem, it’s just an engineering effort to put it together with an IMU mocap system like this one to build the complete “software + IMUs” approach to sensing pilot motor commands. In theory, we should be able to sense the torques on every joint in the body thousands of times per second, with almost no noise or latency, for only a few hundred dollars’ worth of parts. If we prove it can be done, that’ll change the possibilities for exoskeleton controllers, because we’ll have a much richer knowledge of what the pilot is trying to do in any given millisecond.
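
Here is what that complete loop could look like at a conceptual level. Every function below is a placeholder stub (none of these are real library calls); in a real system they would be backed by an IMU mocap solver, a learned GRF model, and a whole-body inverse dynamics routine (e.g., a call into a physics engine like Nimble):

```python
import numpy as np

# Placeholder stubs, purely for illustration; each would be a substantial component in practice.
def read_imu_frame():                    # hypothetical: one frame of raw IMU data
    return np.zeros(3), np.zeros(3)

def estimate_pose(accels, gyros):        # hypothetical IMU mocap: pose, velocity, acceleration
    return np.zeros(23), np.zeros(23), np.zeros(23)

def predict_grf(q, qd, qdd):             # hypothetical learned GRF model (the missing piece)
    return np.zeros(6)

def inverse_dynamics(q, qd, qdd, grf):   # hypothetical whole-body inverse dynamics
    return np.zeros(23)

# Conceptual real-time loop for "software + IMUs" motor command sensing:
for _ in range(6000):                    # roughly one second of data at a 6 kHz IMU rate
    accels, gyros = read_imu_frame()
    q, qd, qdd = estimate_pose(accels, gyros)
    grf = predict_grf(q, qd, qdd)
    joint_torques = inverse_dynamics(q, qd, qdd, grf)   # the pilot's motor commands
```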

If you’re already convinced, I could use your help! If you know how to code and are interested in helping out, leave your email so we can let you know as soon as we’ve posted the GRF modeling competition! If you’re a biomechanics person with access to a motion capture lab, use AddBiomechanics.org to automate your scaling and marker registration for OpenSim! As a by-product, you’ll be donating data to research projects including the GRF competition. You can also read Part 3, where I’ll talk about different kinds of exoskeleton controllers we can build with that kind of “high fidelity motor command sensing” capability, once we have it.

Read Part 3!

If you’re still thinking “but what about [brain interfaces / EMG / force sensors / gait cycles / etc]?” then read the Appendix!
