Appendix to “How exoskeletons can sense your motor commands (Part 2 of 3 on Exos)”

Keenon Werling
7 min read · Aug 24, 2022

--

This is the appendix to Part 2 of my series on exoskeleton research.

There are lots of creative ideas for how to sense motor commands, other than what I presented in Part 2, and people have developed working devices at every level of the “intention cascade.”

The “intention cascade.” See the main Part 2 post for more details.

Sensing Motor Commands at the Motor Cortex level:

While it is possible to measure electrical activity through the skull (electroencephalogram — EEG), those signals are far too noisy to decode fine motor commands. Practical methods to measure the motor cortex directly generally need to get electrodes through the skull, touching the brain itself. That means brain surgery, with all its inherent risk, plus the added risk of infection from the persistent holes in the skull that have to be left open to run cables. As a result, this work is done on monkeys, and on people with complete paralysis.

Leaving that aside, though, these devices still have a bandwidth problem. It was a big deal when folks at Stanford managed to let someone type at 90 characters per minute using a brain interface back in 2021. That's only about 1.5 letters per second. To control an exoskeleton, we need to know what every joint in the body is doing, ideally to several decimal places of precision, updated every millisecond. So brain interfaces are currently at least 10,000x too slow, not to mention dangerous and expensive.
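To make the bandwidth gap concrete, here's a rough back-of-envelope calculation. All the exoskeleton-side numbers (joint count, update rate, bits of precision) are my own illustrative assumptions, not figures from the Stanford study:

```python
# The Stanford handwriting BCI decoded ~90 characters per minute.
chars_per_minute = 90
seconds_per_char = 60 / chars_per_minute          # ~0.67 s per letter
bci_bits_per_sec = (chars_per_minute / 60) * 5    # ~5 bits/char for English text

# A whole-body exoskeleton controller, assuming ~20 joints sampled
# at 1 kHz with ~16 bits of precision each (illustrative numbers).
joints, rate_hz, bits_per_sample = 20, 1000, 16
exo_bits_per_sec = joints * rate_hz * bits_per_sample

print(f"{seconds_per_char:.2f} s per letter")
print(f"BCI: ~{bci_bits_per_sec:.1f} bits/s, exo: {exo_bits_per_sec} bits/s")
print(f"ratio: ~{exo_bits_per_sec / bci_bits_per_sec:,.0f}x")
```

Under those assumptions the gap comes out well above the 10,000x figure; tweak the joint count or sample rate and the conclusion doesn't change.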

Sensing Motor Commands at the Motor Neurons level:

So let's follow the signal out of the brain, and down towards the motor neurons. When a motor neuron excites a muscle, it generates an electrical signature. These signals can be picked up with electromyography (EMG) sensors, which can be fairly inexpensive and placed non-invasively on the skin. However, lots of things happening in the body generate electrical signatures besides the muscle you're interested in, and separating the muscle activation from the other sources is an unsolved problem. Neighboring muscles, the heartbeat, sweat on your skin, and even radio waves can all confuse your sensor.

Generally, people get a useful signal by measuring very large muscles (which generate a lot of electrical activity), and then processing the signal (mostly smoothing) to take out the randomness. The first problem is that this smoothing also slows down the sensor: it can take more than a second to decide that you actually flexed your muscle, rather than that the volume suddenly went up in a song on a radio station broadcasting at a wavelength similar to your height. Another problem is that smaller muscles are very difficult to detect. One way to reduce noise, and to detect smaller muscles, is to invasively insert EMG sensors (using a needle) directly into the muscle of interest. I'm not thrilled about the idea of inserting a dozen needles into my body every time I put on my exoskeleton, so this wouldn't be my preferred sensor type, but maybe it could work.

Even if we did get clean signals about what each muscle is doing, there's one further problem: it's not always obvious how muscles correspond to motors in an exoskeleton. A muscle may cross multiple joints at once, and contracting it may produce different levels of torque at each joint depending on factors like the angle and velocity of that joint. Mapping muscle excitations to motor commands is non-trivial. I believe this is also a solvable problem, but it's worth calling out.
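To make the smoothing trade-off concrete, here's a minimal sketch of the standard rectify-then-smooth envelope pipeline (this is a generic textbook approach, not any particular device's processing; the window length is exactly the noise-versus-latency knob described above):

```python
import numpy as np

def emg_envelope(raw_emg, fs, window_s=0.25):
    """Classic EMG envelope: full-wave rectify, then moving-RMS smooth.
    A longer window_s gives a cleaner envelope but a slower response."""
    rectified = np.abs(raw_emg)
    n = max(1, int(window_s * fs))
    kernel = np.ones(n) / n
    # moving average of the squared signal, then sqrt -> moving RMS
    return np.sqrt(np.convolve(rectified ** 2, kernel, mode="same"))

# Toy signal: 2 s of baseline noise, with a "muscle burst" in the second half.
fs = 1000.0
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / fs)
raw = 0.05 * rng.standard_normal(t.size)
raw[t >= 1.0] += 0.5 * rng.standard_normal((t >= 1.0).sum())  # active muscle

env = emg_envelope(raw, fs)
print(env[:1000].mean(), env[1000:].mean())  # envelope rises during the burst
```

Shrink `window_s` and the envelope reacts faster but gets noisier; that's the same tension that makes real EMG feel laggy.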

Sensing Motor Commands at the Torques level:

Once the motor neurons have fired and the muscles have begun to contract, we can try to measure the forces that the muscles are applying to the body. Measuring forces happening inside the body (between muscles and bones) requires surgically implanting force sensors into your skeleton. While people have done this to measure knee contact forces, it's a big lift to require joint replacement surgery before I can use an exoskeleton.

So we'll need to content ourselves with inferring the body's internal forces (the muscle-bone forces we are interested in) by looking at how it interacts with the environment. The most popular way to do this is to add force sensors to exoskeletons (pictured above), so that they can measure the force between the person and the exoskeleton. These are non-invasive, accurate, and can be read thousands of times per second — all wonderful traits for sensors. The challenges with force sensors are that (a) they can only measure forces acting on the exoskeleton, so any body parts the exoskeleton doesn't touch don't get measured, and (b) they're surprisingly expensive and require individual calibration, so covering the exoskeleton with arrays of hundreds of them to measure every conceivable force is not practical. Despite current drawbacks, I think this "torque level" is the best place to sense human motor commands, and the main post discusses how I think we can do it.
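Here's a toy version of that inference, under a quasi-static assumption (a real estimator would also have to subtract out the exoskeleton's own inertia, friction, and motor torque; the function name and numbers are mine):

```python
def estimate_human_torque(cuff_force_n, lever_arm_m):
    """Quasi-static estimate of the torque the pilot exerts about a joint,
    inferred from the interaction force measured at a cuff multiplied by
    the cuff's lever arm about that joint. Ignores exo inertia, friction,
    and motor torque, which a real estimator would need to account for."""
    return cuff_force_n * lever_arm_m

# e.g. 40 N measured at a shank cuff 0.3 m below the knee axis:
print(estimate_human_torque(40.0, 0.3))  # ~12 Nm about the knee
```

The appeal is that both inputs come from cheap, fast, non-invasive measurements — the hard part, as noted above, is covering enough of the contact surface to catch every force.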

Sensing Motor Commands at the Motion level:

Now, we arrive at the simplest, cheapest, and most popular method for sensing someone’s motor commands: looking at the motion that resulted from their past motor commands, and assuming that the future will be similar. The most common way to do this is “gait cycle” assistance: tell someone to walk at a constant speed on a treadmill, put switches on their feet to detect when their foot contacts the ground, and then just time the delay between foot strikes. You can guess pretty well what someone’s muscles are trying to do, given what percentage of the way they are between foot strikes, and then apply assistance accordingly. The benefit of this approach is that it’s simple, and it works: get two foot switches and a timer, and you can get pretty decent exoskeleton assistance. The only problem with this approach is that it’s inflexible. Because you only get to see motion after it has already happened, the only way to use “motion observation” to sense motor commands in real time is for the motion to repeat itself in a predictable pattern. If the exoskeleton pilot tries to do something that isn’t cyclic, it stops working.
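The whole scheme really does fit in a few lines. Here's a minimal sketch (class and variable names are mine); a real controller would then look up an assistance torque as a function of the estimated phase:

```python
class GaitPhaseEstimator:
    """Foot-switch gait phase: assume the next stride will take as long as
    the last one, and report how far (0 to 1) we are through the stride."""

    def __init__(self):
        self.last_strike = None      # time of the most recent heel strike
        self.stride_duration = None  # duration of the last full stride

    def on_heel_strike(self, t):
        if self.last_strike is not None:
            self.stride_duration = t - self.last_strike
        self.last_strike = t

    def phase(self, t):
        """Fraction of the way through the current stride, or None if we
        haven't seen two heel strikes yet."""
        if self.stride_duration is None:
            return None
        return ((t - self.last_strike) / self.stride_duration) % 1.0

est = GaitPhaseEstimator()
est.on_heel_strike(0.0)
est.on_heel_strike(1.1)    # the last stride took 1.1 s
print(est.phase(1.65))     # ~0.5: predicted mid-stride
```

Note the built-in assumption: `phase` is only meaningful if the next stride resembles the last one, which is exactly why this breaks down for non-cyclic motion.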

Other strategies to get better real-time sensing of motor commands:

At the end of the day, I want my exoskeleton to instantly respond to my motor commands, whether I’m walking on a treadmill or not. Sensing at the Motor Cortex is out, for obvious reasons — poor monkeys. So that leaves us with three options:

  1. Sensing at the Motor Neurons level
  2. Somehow generalizing “Motion level” sensing to non-repeating motions
  3. Sensing at the Torques level

Personally, I suspect the ultimate solution will be a blend of the three, but the near-term opportunity is in torque-level sensing, and you can scroll up to see why. Let's break down the opportunities and challenges in the other two, though, because those are also exciting:

Opportunities to improve Motor Neuron level sensing:

“Automated semi-real-time detection of muscle activity with ultrasound imaging”, Sosnowska et al 2021

Going after better motor neuron sensing, we’ll likely need to mess around with needles, which I don’t like. There are promising non-invasive ideas around using ultrasound to sense muscle activations, which I’m very hopeful about, but it’s still early days. For the foreseeable future, intercepting signals at the motor neuron level is either going to be very noisy, or very invasive. But the situation is evolving, and will certainly improve!

Opportunities to improve Motion level sensing:

“Local motion phases for learning multi-contact character movements”, Starke et al 2020, Video

If we try to extend sensing at the motion level to non-repeating motions, we'll need some way to look at arbitrary past motion and predict current motor commands, even when that motion isn't repeating. That feels like it just might be possible, with a lot of data and fancy neural networks. There's a ton of work in computer graphics on building "motion models" to predict likely next motions, mostly used to move video game characters around in realistic ways. These models can run at real-time framerates, so that's promising. My intuition, having played with this tech a bit, is that relying solely on these predictions to infer pilot motor commands for exoskeleton control will still end up feeling fairly "three-legged race"-ish, because the model needs to see several frames of a new direction of motion before it can understand what you're doing and adapt. That means at the beginning of a motion, the exoskeleton will still be trying to assist whatever you were doing before, and may fight you. There's also a distinct risk that the model decides the next thing you want to do is some athletic move that's common in the training set (like jumping in the air), and applies unwanted "assistance" that causes you to fall violently. That being said, I think motion prediction technologies are already good enough to serve as an extra source of signal to reduce noise in a control system — I just wouldn't rely on them by themselves.
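To illustrate the lag problem, here's a toy stand-in for a learned motion model: a predictor that just averages the last few frames. The numbers and window size are arbitrary, but the failure mode at a sudden change of intent is the same one a fancier model exhibits:

```python
import numpy as np

def window_predictor(history, n=10):
    """Toy 'motion model': predict the next frame as the mean of the last
    n observed frames (a crude stand-in for a learned autoregressive model)."""
    return float(np.mean(history[-n:]))

# Pilot moves steadily forward (+1.0), then abruptly reverses (-1.0) at frame 50.
motion = np.concatenate([np.ones(50), -np.ones(50)])
preds = [window_predictor(motion[:i]) for i in range(10, 100)]

# At the moment of reversal the predictor still says full speed ahead,
# so the exo would briefly assist the OLD motion and fight the pilot:
print(preds[40])  # prediction for frame 50: 1.0
print(preds[50])  # ten frames later it has finally caught up: -1.0
```

A learned model adapts faster than a ten-frame average, but it still cannot see the reversal before the reversal appears in its input — which is why I'd use its output to denoise a faster signal rather than as the sole source of motor commands.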
