Experiment results

Dimitra Blana
The quest for a life-like prosthetic hand
3 min read · Dec 13, 2018

Where were we? Oh yes. We want to control a prosthetic hand with our computer model. But first, we want to test whether it behaves the same way as an actual hand when given the same muscle activity. If you recall, I fretted about the fact that in practice we can’t feed the computer model exactly the same muscle activity that drives the actual hand, because we simply can’t record that activity very easily.

During our experiment, we focused on just four muscle groups in the forearm (the extensor of the thumb, the extensor of the index finger, the extensor of the other three fingers, and the flexor of all fingers). With so few muscles, we limited the hand postures to four: open hand, pointing with the index finger, “thumbs up” (all fingers closed and thumb extended), and “L” shape (thumb and index extended, the rest of the fingers closed). We also had a fifth, “rest” posture, a loosely closed hand, for the hand to return to after every other posture.

We used motion capture markers on our participants’ hands to record what their actual hand did, and electromyography (EMG) sensors on their forearms to record the activity of their muscles as they moved. We showed them a video that demonstrated a sequence of postures (randomised by a computer), and asked them to follow along with their own hand. Then we ran our model simulations with the EMG signals as inputs, and compared the model postures with the actual hand postures.
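If you’re curious how a randomised sequence like that might be put together, here is a minimal sketch. The counts for “thumbs up” and “L” are placeholders (the video did contain 10 “open” and 8 pointing postures, as you’ll see below), and the names are mine, not from our actual experiment software:

```python
import random

# Build a randomised demonstration sequence, returning to "rest"
# after every posture. Posture counts for "thumbs up" and "L" are
# placeholders, not our actual protocol.
postures = (["open"] * 10 + ["point"] * 8
            + ["thumbs up"] * 6 + ["L"] * 6)
random.shuffle(postures)

sequence = []
for posture in postures:
    sequence.extend([posture, "rest"])
```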

This table contains our results.

The number of postures achieved by the model, compared with the postures achieved by the participants’ hands. The bottom row shows the number of postures of each type shown in the demonstration video. For example, there were 10 “open” postures in the demonstration video. The model identified all 10 “open” postures for every subject except S3, for whom it identified only six.

We measure success using the “success rate”: the ratio of the number of postures correctly identified by the model to the total number of postures. So for our first subject (a.k.a. S1), of the 30 postures shown in the demonstration video, the model matched the actual hand posture in 26. It missed two of the “thumbs up” postures and two of the “L” postures.
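As a quick sketch of the arithmetic (the variable names are mine, not from our analysis code):

```python
# Success rate for S1: postures the model matched, divided by the
# total number of postures (30 for S1).
matched = 26   # model reproduced 26 of S1's postures
total = 30     # postures shown in the demonstration video
success_rate = matched / total
print(f"S1 success rate: {success_rate:.0%}")  # prints "87%"
```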

The keen-eyed will spot that the total number of postures is not always 30 in the calculation of the success rate! Why is that?

The bottom row contains the number of each type of posture shown in the demonstration video. For example, the video contained 8 pointing postures. But for some subjects (S2, S4 and S5), not all 8 postures were identified as “pointing” based on the actual hand data. That could mean that the subject simply missed that posture, or that they didn’t extend their index finger enough for the posture to be clearly identified as “pointing”. To be fair to the model, we excluded those postures from the calculation of the success rate.
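In code, the adjustment might look something like this (a sketch with made-up numbers, not our actual pipeline):

```python
# Hypothetical counts for one subject's "pointing" postures.
shown_in_video = 8      # pointing postures in the demonstration video
identified_in_hand = 7  # one attempt too ambiguous to label "pointing"
matched_by_model = 6    # model reproduced 6 of those 7

# The denominator counts only postures confirmed in the hand data,
# so the model isn't penalised for postures the subject missed.
success_rate = matched_by_model / identified_in_hand
```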

The results are… mixed. The model did well for some subjects, like S2, and quite badly for others, like S3. Here’s a clue as to why: S2’s forearm is much larger than S3’s. Why does this matter? When the forearm is quite small, it is difficult to place four EMG sensors far enough apart to record four separate signals from the muscles underneath the skin. We get “cross-talk”: the activity of the thumb extensor muscle, for example, gets picked up by the EMG sensor that is meant to record only the activity of the index extensor muscle. With confusing inputs, it is not surprising that the model cannot perform the correct movements.
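A common way to think about cross-talk is as linear mixing of the underlying muscle activity. This toy example (the mixing weights are invented for illustration, not measured) shows how a signal from one muscle leaks into a neighbouring sensor:

```python
import numpy as np

# True activity of four muscles: only the thumb extensor is active.
true_activity = np.array([1.0, 0.0, 0.0, 0.0])

# Each row is one EMG sensor; off-diagonal weights are cross-talk.
# On a small forearm the sensors sit closer together, so these
# off-diagonal terms grow. (Weights are illustrative only.)
mixing = np.array([
    [0.8, 0.2, 0.0, 0.0],  # thumb-extensor sensor
    [0.3, 0.7, 0.0, 0.0],  # index-extensor sensor picks up thumb activity
    [0.0, 0.1, 0.8, 0.1],  # finger-extensor sensor
    [0.0, 0.0, 0.1, 0.9],  # finger-flexor sensor
])

recorded = mixing @ true_activity
print(recorded)  # index-extensor channel reads 0.3 despite that muscle being idle
```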

Were you expecting a brilliantly successful result? I compared our journey to an American TV show, and as you know, the plot resolution is rarely as satisfying as we’d like it to be (*cough* Lost *cough*). Our study confirmed that using musculoskeletal modelling to control a hand prosthesis has promise, but it relies on good inputs. I expect that in a few years, we won’t have to use surface EMG recordings: we will have access to many more muscle activity signals, perhaps reading directly from nerves!

Basically what I’m saying is, we’ve built the model, now we just have to wait for the nerve interface technology to catch up 😎

So is that it, are we done? No! This is just the end of chapter one. (I know, I’m mixing my metaphors.) Next time: I’ll introduce you to our new and exciting project, plus I have some personal news to share.
