
Behind the screens: towards balance detection

Dylan Opdam
Orikami blog

--

We might not realize it, but we rely heavily on our balance. Only when our balance is compromised does this dependency become clear. People with compromised balance control are more likely to fall and risk injury as a consequence. People with balance disorders often rely on others for daily tasks they can no longer do by themselves. A task as simple as putting away the groceries might already be problematic for someone with a balance disorder.

Balance issues can be caused by several disorders. Some are sudden and short-term, like an infection of the vestibular system or the surrounding nerves. Others are chronic, for example Parkinson’s disease and multiple sclerosis. Measuring the symptoms of these diseases allows us to infer disease progression and the effectiveness of medication.

I first came across this subject while writing my master’s thesis on the automatic detection of freezing of gait in Parkinson’s patients. After my thesis I continued with a similar subject, helping to investigate the possibility of measuring balance. Through this blog I would like to share some first steps taken towards balance detection, in particular estimating step parameters from sensor data.

Measuring balance in a home setting

The gold standard in gait analysis is obtained with a video tracking system, manual video annotation or pressure-sensitive floor tiles. With these tools you can extract when and where people place their feet during different actions. This information is used to calculate different gait parameters such as:

  • step length (the distance one foot travels during a single step),
  • step clearance (how much space there is between the foot and the ground at the highest point of the step),
  • side stepping (how far sideways the foot moved during the step),
  • gait symmetry (the differences between the left and right leg).

Some pressure floors have the spatial accuracy to track how a person’s center of mass travels along the foot sole, which in turn can be used to say something about how the subjects place their feet to remain in balance.

It is one thing to have this equipment set up in a lab where subjects perform some tests. Having a system that measures and estimates these gait parameters in a patient’s home is quite something else, and not only because of the impracticality of setting up the equipment. In a clinical setting there is a lot of control over the environment and potential interfering factors. In the home, by contrast, there is almost no control, and the sensors and algorithms still need to work under those conditions.

A possible replacement for the large equipment is a pair of insoles, such as these from Moticon². Each insole has sixteen pressure sensors spread over the bottom of the foot, as well as a three-axis gyroscope, a three-axis accelerometer and a temperature sensor. There is also storage capacity in the insole to save data locally. When longer recording sessions are needed, the insoles’ Bluetooth can be used to save the data externally.

Estimating distance from acceleration data

The gait parameters we are interested in are all based on distances travelled. However, the insoles do not measure distance travelled; they measure acceleration. We can estimate the position of the sensor by integrating the acceleration signal over time twice. There are, however, a few things we need to consider for this to work.
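As a minimal sketch of this idea (with hypothetical array names, and assuming the acceleration is already gravity-free and expressed in a fixed frame, which is exactly what the rest of this blog works towards), the double integration could look like:

```python
# Sketch: naive position estimate by integrating acceleration twice.
# 'acc' is a hypothetical (N, 3) array of gravity-free acceleration
# in m/s^2, sampled at 'fs' Hz.
import numpy as np

def double_integrate(acc, fs):
    """Integrate acceleration twice (simple rectangle rule) to get position."""
    dt = 1.0 / fs
    vel = np.cumsum(acc, axis=0) * dt  # m/s, assumes zero initial velocity
    pos = np.cumsum(vel, axis=0) * dt  # m,   assumes zero initial position
    return vel, pos

# Example: a constant 1 m/s^2 forward acceleration for one second at 100 Hz
acc = np.zeros((100, 3))
acc[:, 0] = 1.0
vel, pos = double_integrate(acc, fs=100)
```

Note that this assumes the sensor starts at rest at the origin; any error in the acceleration is integrated twice and therefore grows quickly, which is the core problem discussed below.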

Firstly, the accelerometer measures linear acceleration as well as gravity. This means that even when the sensor is not moving, the accelerometer will report 1 g of acceleration. If the gravity component is not removed before estimating the position of the sensor, the sensor would appear to keep falling even while lying still on the ground. This gravity influence is more severe than one might expect at first glance. Except for short-lived high accelerations caused by tapping or hitting the floor, the sustained accelerations of a foot or human body are typically far less than 1 g. We therefore need to subtract a large vector to measure a small signal, and we risk subtracting slightly too much or too little, leaving the small signal distorted.

Secondly, the sensor measures acceleration in its own reference frame. From the starting orientation of the sensor, the forward motion of the foot might be measured along the X-axis, but during the step the foot is likely angled down, causing the forward motion to be measured partly along the X-axis and partly along the Z-axis. To easily separate step length, step clearance and side stepping, we want the acceleration expressed in the reference frame of the starting orientation, which we use as the world reference. We thus need to take the orientation of the sensor into account. We can estimate changes in the sensor’s orientation by integrating the angular velocity measured by the gyroscope over time. Once we know the sensor’s orientation, we can transform the measured acceleration to the world reference frame.
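A sketch of this transformation, with hypothetical `acc` (m/s²) and `gyro` (rad/s) arrays and SciPy’s rotation utilities; a production version would use a proper sensor-fusion filter rather than naive integration:

```python
# Sketch: rotate sensor-frame acceleration into the world frame, using the
# orientation obtained by integrating the gyroscope's angular velocity.
import numpy as np
from scipy.spatial.transform import Rotation

def to_world_frame(acc, gyro, fs):
    """acc, gyro: (N, 3) arrays at fs Hz; world frame = initial sensor frame."""
    dt = 1.0 / fs
    q = Rotation.identity()                         # start aligned with the world frame
    acc_world = np.empty_like(acc)
    for i in range(len(acc)):
        acc_world[i] = q.apply(acc[i])              # sensor frame -> world frame
        q = q * Rotation.from_rotvec(gyro[i] * dt)  # integrate one gyro sample
    return acc_world
```

Because each gyroscope sample is integrated into the running orientation, any gyroscope bias also accumulates here, which is the drift problem discussed later in this blog.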

Figure 1 illustrates the difference between the acceleration signal in the sensor’s reference frame and the same signal represented in the world reference frame. The data was recorded from the accelerometer of an Android phone lying still on a table. The phone is then rotated onto one of its sides, moving it as little as possible, and kept still for a few seconds so the gravity component is clearly visible. From the sensor’s perspective the gravity vector moves around the sensor and is present in multiple axes, as shown in figure 1a.

Figure 1: Accelerometer data recorded by an Android phone. The phone is turned onto different sides to measure gravity in different orientations. Figure a (top) shows the acceleration measured in the sensor’s reference frame. Figure b (bottom) shows the same data transformed to the world reference.

In the first five seconds in figure 1, the acceleration is largest along the Z-axis. This is the measured gravity. After five seconds the phone is rotated onto its side and gravity is now measured along the Y-axis. After transforming the signal to counter the rotation of the sensor, the acceleration is expressed in the world reference, as shown in figure 1b. In this representation gravity is constantly measured along the Z-axis, since from the world’s perspective gravity always points downwards. We can now remove a large part of the measured gravity by subtracting 1 g from the Z-axis. This method is not perfect; I will deal with the remainder of the gravity component later in this blog.
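In code, this gravity removal is a one-liner on the world-frame signal (assuming, as in figure 1b, that the world Z-axis is vertical):

```python
import numpy as np

G = 9.81  # m/s^2, nominal gravity

def remove_gravity(acc_world):
    """Subtract the nominal gravity vector from world-frame acceleration."""
    linear = acc_world.copy()
    linear[:, 2] -= G  # gravity lies along the world Z-axis
    return linear
```

A stationary sensor should then read approximately zero; any remainder comes from orientation errors and sensor bias.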

Reconstructing a step

To test the insoles, we collected some simple data to experiment with. We recorded someone walking a few steps in a straight line on a flat surface. Let us assume we want to estimate the position of the foot at each sample in a single step, to reconstruct how the foot moved through the room. Figure 2 shows the accelerometer and gyroscope data of the left foot during walking. Having the sensors pressed directly against the sole of the foot has a large advantage compared to other positions. At every step, there is a moment when the foot is not moving, namely when it touches the ground. At these moments, the sensor will not record any extra movement. This gives us a nice, clear repeating signal in which we can easily detect when the foot is load bearing. If the sensors were, for example, moved to the ankle, these periods would be much harder to detect, since the ankle moves constantly during the entire step.

Figure 2: Acceleration and angular velocity measured from the left foot during walking in a straight line.

Figure 3 shows the accelerometer and gyroscope data of the left foot during a single step. From this raw data it is difficult to say anything about the gait parameters of the step. It is, however, worth pointing out the peak in the acceleration signal at 8.8 seconds. This is likely the heel contact moment. After heel contact there is some activity in the signal as the foot lands flat on the ground, and eventually the signal becomes still, indicating the foot is no longer moving.

Figure 3: Figure a (top) shows the acceleration data measured during a single step. Figure b (bottom) shows the angular velocity measured by the gyroscope during a single step.

The simplest method to estimate the foot’s position is to transform the accelerometer signal to the world reference, remove gravity as discussed above, and then integrate the signal over time twice. First the orientation of the sensor is estimated by integrating the angular velocity measured by the gyroscope. Figure 4 shows the estimated orientation of the sensor, as well as the main problem with this method.

Figure 4: By integrating the measured angular velocity over time, we get the orientation of the sensor. The orientation is represented by the number of degrees rotated around each axis since the start of the recording.

Sensors are not perfect and have noise. If this noise had a mean of zero, the problem would be much less severe. However, many sensors are influenced by temperature and input voltage. This gives them a bias that slowly changes over time as the temperature changes or batteries drain. In the case of a single step, the recording time is short enough to consider the bias constant, but unknown. Manufacturers provide calibration charts to calculate this bias and correct for temperature or voltage differences. Some of the more expensive sensors have extra sensors built in and compensate for this bias automatically. Even then, there is a good chance you still have to deal with a bias, though a smaller one. Figure 4 is an example of this problem. After the foot has landed, it is actually back in the same orientation it started in, while this is certainly not the case in the estimated orientation. The orientation estimate suggests the foot is rotated to some degree around all of the axes. For now, we will just continue to see the impact of this bias on our position estimate.

Since acceleration is the derivative of velocity, integrating the acceleration signal over time gives us the velocity of the sensor. After transforming the acceleration signal to the world reference and removing the gravity component, we integrate it over time to get the velocity. Figure 5 shows the result. According to this figure the foot just keeps moving faster and faster. However, we know that at the end of the measurement the foot is stationary on the ground. After less than a second of recording, the bias has created an error of more than seven meters per second.

Figure 5: The sensors estimated velocity in the world reference.

Just to show the severity of this error, we integrate the velocity over time to get the sensor’s position, shown in figure 6. According to this figure the foot moved sideways by almost 2 meters and downwards by about 3 meters. We know, however, that the subject was walking in a straight line on a level floor, so both these values should be close to zero. Since a typical step is just short of a meter, the gait information would clearly be lost entirely with this naïve approach.

Figure 6: The estimated position of the sensor in the world reference.

Applying the ZUPT algorithm

One way to remove the bias is the zero-velocity update algorithm (zupt algorithm)¹. This method requires the sensor to stand still periodically. As we integrate the acceleration over time, the bias builds up, causing the velocity estimate to become more and more inaccurate. If we know the true velocity of the sensor at certain moments, we can calculate the error between the estimated velocity and the true velocity. This error is the bias accumulated over the samples. Since the bias can be considered constant over short time periods, the accumulated drift grows linearly, so we can model it as a linear function and remove it from the signal.
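A minimal sketch of this correction, assuming the sensor is known to be still at the first and last sample of the step, so the true velocity there is zero:

```python
# Sketch: zero-velocity update as a linear detrend of the integrated velocity.
import numpy as np

def zupt(vel):
    """Remove linear drift so vel is zero at the known stand-still endpoints.

    vel: (N, 3) velocity estimate with true zero velocity at both ends.
    """
    n = len(vel)
    t = np.linspace(0.0, 1.0, n)[:, None]    # normalized time
    drift = vel[0] + (vel[-1] - vel[0]) * t  # linear model of the accumulated bias
    return vel - drift
```

The same linear detrend can be applied to the integrated gyroscope angles, using the assumption that the step ends in the starting orientation.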

It is relatively trivial to detect when an accelerometer is standing still. You simply look for a period where the combined magnitude of the acceleration over all three axes equals 1 g, give or take the sensor noise. In theory this can also occur when the sensor is moving at constant velocity. However, unless you are working with precise actuators made to move at constant speed, you will find that as soon as the sensor moves, its readings will not be constant. Especially when recording human gait, the movements will never be constant enough to measure no acceleration at all.
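As a sketch, such a stand-still detector could flag samples whose total acceleration magnitude is close to 1 g. The 10% tolerance here is an arbitrary illustrative choice; in practice you would tune it to the sensor noise and also require the condition to hold for a minimum duration:

```python
import numpy as np

G = 9.81  # m/s^2

def is_stationary(acc, tol=0.1):
    """Flag samples where the acceleration magnitude is within tol of 1 g."""
    mag = np.linalg.norm(acc, axis=1)
    return np.abs(mag - G) < tol * G

acc = np.array([
    [0.0, 0.0, 9.81],   # resting flat: pure gravity
    [5.0, 0.0, 9.81],   # accelerating foot
    [9.81, 0.0, 0.0],   # resting on its side: still pure gravity
])
```

Note that the magnitude check works in any orientation, since gravity always contributes 1 g regardless of which axes it falls on.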

In the case considered in this blog, we have isolated a single step, so we do not have to detect automatically when the sensor is standing still: we know the sensor is still at the beginning and at the end of the step. The same trick can also be used when estimating the orientation of the sensor. We know the subject is walking in a straight line on a flat surface, so the orientation at the end of the step should be the same as the starting orientation. Since we use the starting orientation as reference, the step starts at zero degrees of rotation around all axes and ends at zero degrees of rotation around all axes. A better estimate of the sensor’s orientation means more of the gravity component is isolated, resulting in a cleaner acceleration signal.

Figure 7 shows the sensor’s velocity after removing the bias from both the estimated orientation and the estimated velocity. Not only does the velocity no longer drift away from zero, but the shape of the graphs corresponds better with what we expect from a footstep.

Figure 7: The estimated velocity of the sensor in the world reference after applying the zupt algorithm.

The velocity along the X-axis does not change sign for most of the step, showing the foot mostly moves in one direction. This is the main forward motion of the step. The velocity along the Y-axis remains close to zero, showing only small corrective motions; we may therefore assume the Y-axis points to the side of the foot. Even though the subject tries to walk in a straight line without moving his or her feet side to side too much, humans are not perfect, which results in the small sideways velocity along the Y-axis. Finally, the velocity along the Z-axis shows the foot moving in one direction at the beginning of the step and in the opposite direction at the end. This agrees with the foot lifting off the ground and being placed down again.

Integrating these velocities over time gives a much more reasonable position estimate. Figure 8 shows what the estimates look like. You may notice that by the end of the step the foot has travelled about -0.45 meters, or -45 centimeters. We are not so much interested in the sign of the distance; the sign merely indicates which way the sensor points. Though I agree it would be more intuitive if a positive distance meant the foot moved forward, this sensor happens to face with the positive side of its X-axis backwards.

Figure 8: The estimated sensor position in the world reference after applying the zupt algorithm.

The estimate indicates a measured step length of 45 cm. This is on the short side for a step length and likely an underestimation. The estimate of the movement along the Z-axis looks very promising. We see the foot being lifted about 10 cm. Then the foot drops a little as it swings forward; the small arc the foot makes while swinging lowers it slightly. Finally, the foot moves down and is placed back on the ground at a height of about -4 cm.

Conclusion

Applying the zupt algorithm has drastically improved our estimate. The naïve method resulted in a scenario where the foot moved 2 meters sideways and about 3 meters vertically without returning to its original height. Though our floor might not be perfectly level, it is impossible for the elevation to change by 3 meters in a single step.

The results from the zupt algorithm are not perfect either. The estimate still suggests the foot lands 4 cm below the ground, and while the test subject did make small steps, 45 cm seems to be an underestimation. However, this is a good starting point from which to improve.

For now, we have separated the steps by hand. In the future this should ideally be done by an algorithm that detects when the accelerometer is stationary. This might be as trivial as finding periods without significant acceleration, or a more complex solution combining the accelerometer and the pressure sensors. If this detection is inaccurate, our estimate of the accumulated bias will be wrong, which leads to inaccurate position estimates. Improving this detection might therefore improve the accuracy.

We have also been working under the assumption that the subject walks in a straight line on a flat surface. This is convenient because it lets us apply the zupt algorithm to get a good estimate of the sensor’s orientation, which helps to correctly remove the gravity component. For real-world applications you want to get rid of this assumption: people might be walking on hills or ramps, or the street may simply be crooked. If we know the sensor is stationary, we know the accelerometer is only measuring gravity. From the direction of gravity we can work out the orientation of the plane the subject is standing on. We can then use this more directly measured orientation as ground truth for estimating the accumulated bias, instead of requiring the final orientation to equal the starting orientation. This should make the position estimation more robust for measuring outside the lab.
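A sketch of that last idea, using SciPy to recover the sensor’s tilt from a single stationary accelerometer sample. The heading around the vertical axis is unobservable from gravity alone, and the names and frames here are illustrative assumptions:

```python
# Sketch: while stationary, the accelerometer measures only gravity, so its
# reading gives the sensor's tilt relative to the vertical.
import numpy as np
from scipy.spatial.transform import Rotation

def tilt_from_gravity(acc_sample):
    """Estimate sensor tilt from one stationary sample (acc = gravity only)."""
    g_sensor = acc_sample / np.linalg.norm(acc_sample)  # measured gravity direction
    g_world = np.array([[0.0, 0.0, 1.0]])               # gravity along the world Z-axis
    rot, _ = Rotation.align_vectors(g_world, g_sensor[None, :])
    return rot  # rotation taking sensor-frame vectors into the world frame
```

This tilt could then serve as the ground-truth orientation at the stand-still moments, replacing the straight-line, flat-floor assumption.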

It has been an interesting project. Though there is a lot more work to be done to measure balance, this is a good starting position. I expected the zupt algorithm to improve the results, because adding domain knowledge usually does. However, I am surprised by how much the results improved with just the assumptions about the final velocity and orientation. I am curious to read your feedback and questions about this project. You can contact me in the comments below or at: Dylan@orikami.nl

[1]: Khairi Abdulrahim (March 2014): Understanding the Performance of Zero Velocity Updates in MEMS-based Pedestrian Navigation https://www.longdom.org/open-access/understanding-the-performance-of-zero-velocity-updates-in-memsbased-pedestrian-navigation-0976-4860-5-53-60.pdf

[2]: Moticon insoles: https://www.moticon.de/insole3-overview/
