Motion data processing

Dimitra Blana
The quest for a life-like prosthetic hand
May 18, 2018

A few weeks ago we brought volunteers to our lab and recorded their hand movements. Of course the actual data capture is only the first step. There is a lot of work to do before we can answer the questions we set out to answer with our experiment. And as promised, I will take you through each step!

It’s going to be fun.

All the processing steps, in pretty colours. Colours are essential for the success of the experiment.

In this post, I’ll explain how we process the motion data (the red box in the picture above). To record the movement of the hand, we used small spherical markers covered in retro-reflective material, a total of fifteen: five on the palm, four on the thumb, three on the index finger and three on the middle finger. We didn’t put any markers on the ring or little finger, because in the movements we are looking at, the middle, ring and little finger are always opening and closing together.

When the positions of these markers are reconstructed in three dimensions in our software, we end up with something like this.

What could this be? A pair of scissors? A sunbathing rabbit? A gazelle hitching a ride on a crocodile?

The software can see the markers, but doesn’t know which is which. So the first thing to do is label the markers, which is quite easy, if you know that you are looking at a hand with three digits…

More colours! The palm is yellow, the thumb is red, the index finger is blue and the middle finger is purple.

In this particular posture, the fingers are fully open, so all the markers are clearly visible. Unfortunately, we are not always this lucky. For example, when the hand closes to a fist, the index fingertip marker might be hidden behind the thumb, so the cameras of our motion capture system cannot see it.

The following graph shows the coordinates in three dimensions of one of the markers, recorded during a movement.

The vertical pink bars show the times when the marker was not visible to the cameras. When this happens for a very short amount of time, we can fill the gap ourselves: we know where the marker was before, and where it was after, so we can estimate where it must have been when the cameras lost sight of it.

This is better, but as you can see, we cannot fill all the gaps. If a marker is missing for a longer period of time, we can't be sure of its location, and it is better to leave a gap in the data than to introduce data that are wrong.
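If you are curious what gap filling looks like in code, here is a rough Python sketch of the idea: interpolate across the short gaps, and leave the long ones empty. The ten-sample limit is just an illustrative choice, not the exact threshold we use, and real motion capture software offers fancier fills than a straight line.

```python
import numpy as np

def fill_short_gaps(coordinate, max_gap=10):
    """Fill short gaps (runs of NaNs) in one marker coordinate by linear
    interpolation, and leave longer gaps empty.
    max_gap=10 samples is an arbitrary choice for this sketch."""
    x = np.asarray(coordinate, dtype=float).copy()
    missing = np.isnan(x)
    if not missing.any():
        return x

    samples = np.arange(len(x))
    # First interpolate across every gap...
    filled = np.interp(samples, samples[~missing], x[~missing])
    # ...then undo the fill wherever the gap is too long to trust.
    gap_id = np.cumsum(~missing)          # constant within each run of NaNs
    for g in np.unique(gap_id[missing]):
        gap = missing & (gap_id == g)
        if gap.sum() > max_gap:
            filled[gap] = np.nan
    return filled
```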

The next thing to do is filtering. The purpose of filtering is to remove noise from our data. And no, by “noise” I don’t mean the protestations of the baby next door. In signal processing, noise is any unwanted modification to the data we are recording.

A major source of noise in motion data is the so-called “skin movement artifact”. When we record motion data we are interested in the movement of the bones, but we cannot stick our markers directly on them. Instead, we stick them on the skin. Unfortunately, skin is generally quite loose and doesn’t move in exactly the same way as the underlying bone.

During movements, the skin can wobble, which makes the markers wobble, and this is easy to see in our data. Have a look at the graph below.

Here I am showing only one of the coordinates of a marker. The line is very jerky because of the wobble of the skin. This clearly does not describe the movement of the bone: its frequency is too high, i.e. it is changing much faster than a finger could possibly move.

By filtering out the high-frequency noise, we get a smoother line.
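For those who like to peek under the bonnet, this kind of smoothing is typically done with a low-pass filter. Here is a small Python sketch using a Butterworth filter; the 100 Hz capture rate and 6 Hz cutoff are typical values for hand motion, not necessarily the exact settings from our study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_marker(coordinate, capture_rate=100.0, cutoff=6.0, order=4):
    """Zero-phase low-pass Butterworth filter for one marker coordinate.
    Anything changing faster than the cutoff frequency is treated as noise.
    Any remaining gaps (NaNs) would need to be dealt with before filtering."""
    b, a = butter(order, cutoff / (capture_rate / 2))  # cutoff relative to Nyquist
    return filtfilt(b, a, np.asarray(coordinate, dtype=float))
```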

Labelling the markers, gap filling and filtering are the main steps in the processing of motion data. For our experiment, we have to add one more step. From the locations of the markers, we want to estimate the angles of the joints, because this is what our computer model outputs.

We assume that when the hand is open and all the digits are straight, the angles are zero. As the fingers close, the angles increase. For example, the angle of the knuckles at the base of the fingers is around 90 degrees when the hand is closed.

So how do we go from marker locations to angles? We do this with a bit of vector algebra.

First, we estimate a flat surface from the five markers of the palm. We then calculate vectors (which are lines with a direction, like arrows) from pairs of markers at each finger segment. For example, the first and second marker of the index finger give us a vector representing the proximal phalanx of that finger. Finally, we use dot products of vectors to find the angles between them.

Angle of the index finger proximal interphalangeal joint. (Other hand joint names: carpometacarpal, metacarpophalangeal. Go on, say that three times fast.)
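Here is a minimal Python sketch of that calculation: a best-fit plane through the palm markers, unit vectors along the finger segments, and a dot product to get the angle between them. The marker coordinates at the end are made up, just to show the pieces fitting together.

```python
import numpy as np

def palm_plane_normal(palm_markers):
    """Normal of the best-fit plane through the palm markers (least squares
    via SVD). This plane is the reference for the knuckle angles."""
    pts = np.asarray(palm_markers, dtype=float)
    centred = pts - pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    return np.linalg.svd(centred)[2][-1]

def segment_vector(marker_a, marker_b):
    """Unit vector from one marker to the next along a finger segment."""
    v = np.asarray(marker_b, dtype=float) - np.asarray(marker_a, dtype=float)
    return v / np.linalg.norm(v)

def angle_between(v1, v2):
    """Angle between two vectors, in degrees, from their dot product."""
    cos_angle = np.clip(np.dot(v1, v2), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Made-up positions (in cm) for the three index finger markers:
# base of the finger, middle joint, fingertip.
index_base = [0.0, 0.0, 0.0]
index_middle = [0.0, 4.0, 0.0]
index_tip = [0.0, 6.5, -2.5]

proximal = segment_vector(index_base, index_middle)   # proximal phalanx
distal = segment_vector(index_middle, index_tip)      # segment beyond the joint
print(angle_between(proximal, distal))  # PIP flexion, about 45 degrees here
```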

So this is how we get from grey spheres floating in space to hand joint angles! Next, I’ll go over the steps needed to process the electromyography signals. Then I’ll explain how our computer model uses these processed EMG signals to “move” its fingers, and finally how we compare the movement of the model fingers with the movement of the actual fingers.

Grab a snack, we have a long way to go.

I am a biomedical engineer, and I develop computer models to help understand and treat movement impairment. I am Greek, living in the UK.