Myotera Blog Post #2: Monitoring Movement From Home

Aya Fattah
Published in MedLaunch
Jan 21, 2021

Mazen Dbouk, Marco Garcia, Sahil Gupta, Aashish Harikrishnan, James Jiang, Reagan Miller, Daniel Najarian, Matt O’Brien

20 January 2021

THE PROBLEM

In 2013, it was reported that 5,357,970 people in the US were living with some form of paralysis. One common form, hemiparesis, is weakness or partial paralysis on one side of the body, most often caused by stroke, brain infections, brain trauma, genetics, or brain tumors. Hemiparesis can lead to serious medical complications because it can affect multiple parts of the body, including the face, an arm, a leg, or an entire side of the body. Every year, roughly 795,000 people in the US suffer a stroke, the leading cause of hemiparesis, and approximately 80% of stroke survivors experience hemiparesis.

Hemiparesis can lead to loss of balance, difficulty walking, impaired ability to grab objects, decreased movement precision, muscle fatigue, and lack of coordination. As a result of these difficulties, hemiparesis often leads to learned non-use, in which an individual learns not to use the weaker arm to complete tasks. This, in turn, can lead to muscle atrophy, a substantial loss of muscle due to lack of use.

Current Solutions

There are no easy-to-use, reliable ways to quantify progress in physically rehabilitating someone who suffers from hemiparesis. Especially in the midst of a global pandemic, the need for a solution that works from home, rather than forcing patients to go to a clinic, is apparent. From our market research, existing options for stroke rehabilitation include camera-based systems, optical linear encoders, and mobile apps. All of these have functional shortcomings that we want to solve.

Camera-based systems: These systems are computationally complex. Although they can provide very accurate data, they are hard to use in the average household. They are also limited in mobility, meaning the user cannot leave the area where the system is set up.

Optical linear encoders: Optical linear encoders fulfill only one function. Their primary use is to measure absolute distance, which we could use to calculate joint angles, but they provide no information about where the limb is in space, which limits the types of motion data we want to record and present to the user.

Mobile apps: Current apps that assist with stroke rehabilitation tend to fulfill only one purpose — neurological rehabilitation (i.e. helping to rebuild cognitive, speech, and language functions). Myotera, on the other hand, provides a mobile companion for our movement sensor. The app not only displays metrics of rehabilitation progress (e.g., Limb Activation Equality), but also provides an interactive and encouraging experience for the user through achievements, badges, and relevant reminders.

OUR PROCESS

From the beginning, we knew there was a need for patients to be able to monitor and quantify movement of hemiparetic limbs with an easy-to-use, portable system. Because of this, we chose to take on an assistive rehabilitation role, monitoring patients and presenting relevant data to them, rather than being directly involved in physical therapy and rehabilitation. To achieve this, we decided to build a complete hardware and software solution: record motion data from the affected limb via sensors, process the data through an app, and store that data safely in the cloud for the user. First, we needed to decide what hardware to use, as well as what data metrics to compute.

Hardware

We decided to use a Movesense sensor because of its streamlined API and affordable price. Movesense has every sensor we may need, including an IMU that records accelerometer, gyroscope, and magnetometer data.

Figure 1: The Movesense IMU we are using (source: https://www.movesense.com/product/movesense-sensor/)

For our configuration, research shows that some of the data metrics we want to compute from IMU data require more than one sensor, each placed on a specific part of the affected limb. The key data metrics we need for our key features are joint angles and sensor trajectories. Below is an image showing the configuration we plan to use for a hemiparetic arm, with one sensor on the wrist and one on the upper arm, just above the elbow.

Figure 2: The configuration we want to use (source: https://www.mdpi.com/1424-8220/19/6/1312)

App Design

The Myotera mobile app will initially launch on iOS. As a multiview app, Myotera will enable users to access both short-term and long-term aspects of their stroke recovery through an intuitive yet inspiring medium.

The primary views (screens) consist of:

  • Dashboard — a central view of the most recent measurement of whatever the user chooses to display as their "default" progress metric. This view also includes an insightful suggestion about what to focus on during rehabilitation at that stage. Additionally, the dashboard provides quick access to adding new recordings ('+' icon), editing user preferences, and scrolling through secondary measurements.
  • Progress — a scrollable line-graph visualization of the patient's stroke recovery data to date. This view provides a multi-level glance at a data metric of the user's choice (e.g., Limb Activation Equality), as well as expandable buttons providing more detailed descriptions of any recorded metric.
  • Tasks & Achievements — an inspiring display of badges earned for breaking records and achieving milestone levels of progress, as well as a to-do list of outstanding tasks.
  • Exercise Catalog — a complete repository of exercises with direct links to online videos on the purpose, proper form, and progression models associated with them. The user may also create custom exercises as assigned by their physician.

Data integration between the Movesense-powered wearable device and the mobile app will be key in providing a seamless end-to-end stroke recovery solution. Upon establishing a wireless connection between the Movesense sensor and the iOS device, data will be pulled from the sensor in JSON format using a REST-style API and processed within the application into various data metrics. These metrics take the raw data measured by the sensor and display it as semantically meaningful information, keeping the patient in tune with their recovery progress in an easily understandable way. Tentative metrics may include Movement Dissimilarity (in opposing hemiparetic limbs), Muscle Activation, and Power Output (in various axes).
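
To make this concrete, here is a minimal sketch of what unpacking a single sensor notification could look like. We use Python here purely for illustration (the app itself is iOS), and the JSON field names are an assumption modeled on Movesense's measurement resources, so the real schema may differ.

```python
# Sketch: unpack one hypothetical IMU notification into numpy arrays.
# Field names ("Body", "ArrayAcc", ...) are assumptions, not a confirmed schema.
import json
import numpy as np

sample = '''{
  "Body": {
    "Timestamp": 123456,
    "ArrayAcc":  [{"x": 0.02, "y": -9.79, "z": 0.11}],
    "ArrayGyro": [{"x": 0.50, "y": 0.10,  "z": -0.30}],
    "ArrayMagn": [{"x": 21.0, "y": -4.00, "z": 43.0}]
  }
}'''

def unpack(payload):
    """Convert one JSON notification into (timestamp, acc, gyro, mag)."""
    body = json.loads(payload)["Body"]
    def to_xyz(samples):
        return np.array([[s["x"], s["y"], s["z"]] for s in samples])
    return (body["Timestamp"], to_xyz(body["ArrayAcc"]),
            to_xyz(body["ArrayGyro"]), to_xyz(body["ArrayMagn"]))

t, acc, gyro, mag = unpack(sample)
print(t, acc.shape)  # 123456 (1, 3)
```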

CHANGES FROM DESIGN REVIEW 1

Since Design Review 1, we have made fairly big changes to our design in the hardware and software departments.

On the hardware side of things, our previous design used only a single sensor to track all of the patient's motion data. After Design Review 1, we realized this would not be enough, as some of the important data metrics we wanted to record, such as joint angles, cannot be calculated with only one sensor. The ability to calculate joint angles, obtain sensor trajectories, and take measurements of lower and upper arm length will allow us to model the trajectory of the arm as a whole, rather than just that of the sensor.

Figure 3: Visual of how we could model an arm with our sensor configuration, excluding the trunk sensor (source: https://bijosebastian.wordpress.com/imu/)

In the software world, our changes were primarily adjustments to using two sensors. We began development of joint angle calculation algorithms, made adjustments to our app’s user interface to accommodate two-sensor data, and tried to refine our data visualization algorithms.

FEATURES

Since Design Review 1, we have begun finalizing some of our ideas for features. As we get a better idea of how the processed data will look, we also get a better idea of the kinds of features that are feasible with IMUs. Below are high-level details on features we plan to implement with our design, including a 3D heatmap of trajectories, curve scoring of trajectory data, movement classification, reminders, and joint angle plotting.

3D Heatmap: This feature allows the user to visualize their range of motion in 3D space. By taking measurements of both the lower and upper arm, plotting trajectory data from the sensors relative to the user's body, and factoring in the joint angle, we can effectively plot the position of the user's entire arm over the course of one recording session. We hope that seeing this information encourages the user to extend their range of motion, pushing further against the boundaries of their heatmap.
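
As a rough illustration of the heatmap idea, the sketch below colors each recorded wrist position by how densely the surrounding region of space was visited. The positions array is random stand-in data; the real input would be the body-relative arm positions described above.

```python
# Sketch: color wrist positions by local visit density to approximate a
# 3D range-of-motion heatmap. Stand-in data, placeholder units.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
positions = rng.normal(size=(500, 3)) * [0.20, 0.15, 0.10]  # fake (N, 3) meters

density = gaussian_kde(positions.T)(positions.T)  # density at each sample

ax = plt.figure().add_subplot(projection="3d")
sc = ax.scatter(*positions.T, c=density, cmap="hot", s=8)
plt.colorbar(sc, label="visit density")
ax.set(xlabel="x (m)", ylabel="y (m)", zlabel="z (m)")
plt.show()
```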

Curve Scoring: Using Dynamic Time Warping (DTW), we can take two trajectory curves and compare them, both in how similar their shapes are and how similar their time frames are. This allows us to assess the quality of the user's motion on two criteria:

  1. How similar their motion is to some standard.
  2. How much time it took them to move in some pattern compared to the standard.

The standards must still be determined, but this allows the user to select specific motions or tasks to try, record their motion, and receive a score on how well they move compared to the standard.
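
Here is a minimal sketch of the DTW comparison itself. The length-normalized cost at the end is a placeholder scoring rule for illustration, not our final metric.

```python
# Sketch: classic dynamic-programming DTW between two (N, 3) trajectories.
import numpy as np

def dtw_cost(a, b):
    """Length-normalized DTW alignment cost between trajectories a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])       # pointwise distance
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

# The same arc traced at two different speeds still scores as very similar:
curve = lambda t: np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
print(dtw_cost(curve(np.linspace(0, np.pi, 50)),
               curve(np.linspace(0, np.pi, 120))))        # near zero
```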

Joint Angle Plot: After processing joint angle data, we can safely store it and plot it over a user-selected time frame, letting the user see how their arm extension has progressed.

Movement Classification: Using machine learning, we can classify different types of movement based on the IMU data. We anticipate this feature requiring much longer recording sessions, possibly full-day sessions, so it will be optional. At a high level, we will analyze the IMU data, determine which movement category it fits into (shoulder adduction, elbow flexion, etc.), and show the user which types of movement they engage in most and least. We hope users who see that they move in one category less than the others can improve on that motion over time.
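
The sketch below shows the general shape of this pipeline, using windowed summary statistics and an off-the-shelf classifier. The features, window size, and movement labels are all placeholders rather than our trained model.

```python
# Sketch: classify fixed-width windows of IMU data into movement categories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(imu, width=100):
    """imu: (N, 6) accel+gyro stream -> one summary-stat row per window."""
    windows = [imu[i:i + width] for i in range(0, len(imu) - width, width)]
    return np.array([np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)])
                     for w in windows])

rng = np.random.default_rng(1)
stream = rng.normal(size=(2000, 6))      # stand-in for a long recording
X = window_features(stream)
y = rng.choice(["elbow_flexion", "shoulder_adduction"], size=len(X))  # fake labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:3]))                # predicted category per window
```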

Reminders: This feature is tied very closely to movement classification, and can have two levels of detail.

Level 1: reminders will periodically send a notification to the user, reminding them to move their affected limb.

Level 2: reminders will periodically send a notification to the user during a recording session, naming a specific type of motion that the user is lacking in for that session. Level 2 reminders incorporate movement classification to name the specific types of motion.

IMPLEMENTATION DETAILS

Visualizing Trajectory

We will use the extracted sensor data, in the form of a .csv or .json file, to establish a movement trajectory. The trajectories allow us to assess the efficiency of limb motion in each arm and to analyze the data using Dynamic Time Warping. The main motivations behind this section of our work are to create the variables necessary for future analysis, to filter the data to account for noise, and to create a 3D trajectory to aid in analysis.

Our sensor gives us 9 degrees of freedom (accelerometer, gyroscope, and magnetometer measurements in the X, Y, and Z axes). It is important to note that the measurements are returned in the reference frame of the device, not of the Earth, which prevents outright analysis of the data without processing and manipulation. Obtaining the orientation of the sensor at any given time is therefore necessary.

The first step in accomplishing this is to calibrate the magnetometer, which can be affected by local magnetic sources and electronics. Then, using a sensor fusion algorithm known as a Madgwick filter, we can compute orientation, represented as a four-element vector known as a quaternion. Using the orientation data, we can subtract the effect of gravity from the accelerometer data, and then obtain position by double integration. Our current setup does not yet include the magnetometer calibration, and it uses a Kalman filter rather than a Madgwick filter; the two are similar but optimized differently.
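
Here is a minimal sketch of that pipeline, assuming orientation quaternions (w, x, y, z) have already been produced by the fusion filter:

```python
# Sketch: rotate accelerometer samples into the Earth frame, subtract
# gravity, then double-integrate with the trapezoidal rule.
import numpy as np

G = 9.81  # m/s^2

def quat_to_rot(q):
    """Rotation matrix taking sensor-frame vectors into the Earth frame."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def integrate_position(quats, accels, dt):
    """quats: (N, 4), accels: (N, 3) sensor-frame m/s^2, dt: sample period."""
    lin = np.array([quat_to_rot(q) @ a for q, a in zip(quats, accels)])
    lin[:, 2] -= G                                        # remove gravity
    vel = np.cumsum((lin[1:] + lin[:-1]) / 2 * dt, axis=0)
    return np.cumsum((vel[1:] + vel[:-1]) / 2 * dt, axis=0)

# A sensor at rest with identity orientation should stay near the origin:
N = 200
quats = np.tile([1.0, 0.0, 0.0, 0.0], (N, 1))
accels = np.tile([0.0, 0.0, G], (N, 1))   # accelerometer reads +g at rest
print(integrate_position(quats, accels, dt=0.01)[-1])     # ~[0, 0, 0]
```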

Converting our acceleration data to position introduces integration drift, which accumulates over time and causes our measurements to deviate from their true values. We cannot remove drift entirely, but we can try to minimize it by using clean, processed data, by potentially adding a GPS or optical sensor to the system, or simply by relying on the relative position between the two sensors on the same arm, treating one as fixed in space for plotting.
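
One additional trick, not among the options listed above and assuming a protocol where each recording starts and ends at rest: any nonzero velocity remaining at the end of a session must be accumulated drift, so a linear ramp of it can be subtracted before the second integration.

```python
# Sketch: endpoint-rest drift correction. Assumes the arm is still at the
# start and end of a recording; this is our assumption, not a general fix.
import numpy as np

def remove_velocity_drift(vel):
    """vel: (N, 3) integrated velocity; force the final sample back to zero."""
    ramp = np.linspace(0.0, 1.0, len(vel))[:, None] * vel[-1]
    return vel - ramp
```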

Additional and/or alternative filtering of the signal may be required to account for noise. This depends heavily on how noise propagates through the sensor, specifically whether or not it is centralized. Using a sample set of data from the accelerometer on a phone, we generated the plot featured below. It shows a trajectory over time, similar to the plots we would ultimately generate for each arm and compare in order to evaluate progress in the rehabilitation process.

Figure 4: The above trajectory is actual data recorded from our Movesense sensor.
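
As one example of the additional filtering mentioned above, a zero-phase low-pass filter is a common first pass for accelerometer noise. The sample rate and cutoff below are assumed placeholder values, not tuned settings.

```python
# Sketch: zero-phase low-pass Butterworth filter on raw accelerometer data.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 104.0    # assumed sample rate (Hz)
cutoff = 5.0  # assumed cutoff for voluntary arm motion (Hz)
b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="low")

acc = np.random.default_rng(2).normal(size=(1000, 3))  # stand-in signal
acc_smooth = filtfilt(b, a, acc, axis=0)               # filters with no phase lag
```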

Joint Angle Calculation

In order to quantify the elbow angle, we must combine our orientation data with the raw data from the gyroscope.

First, we create three-dimensional xyz vectors representing angular velocity, which can be taken directly from the gyroscope readings. Then we process the vectors using a third-order approximation method, where Δt is the time increment between the sensor's readings.

Next, using Euler-angle orientation processing, we can convert the xyz data from all of the IMU sensors into ψ, Φ, Θ orientation angles for use in a matrix equation.

Finally, with the processed angular-rate vectors and unit-length direction vectors, we can calculate the elbow's extension angle.
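
As an illustrative stand-in for these equations, which we present only at a high level here, one common way to extract an elbow angle from two orientation estimates is to rotate each segment's long axis into the Earth frame and measure the angle between the axes. The assumption that each sensor's local x-axis points along the bone is ours for this sketch, not part of our final formulation.

```python
# Sketch: elbow angle as the angle between the upper-arm and forearm axes,
# each derived from that sensor's orientation quaternion (w, x, y, z).
import numpy as np

def segment_axis(q, local_axis=(1.0, 0.0, 0.0)):
    """Earth-frame direction of a segment whose long axis is local_axis."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ np.asarray(local_axis)

def elbow_angle(q_upper, q_forearm):
    """Angle (degrees) between the two segment axes."""
    u, f = segment_axis(q_upper), segment_axis(q_forearm)
    cos = np.clip(u @ f / (np.linalg.norm(u) * np.linalg.norm(f)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Forearm rotated 90 degrees about z relative to the upper arm:
q_up = (1.0, 0.0, 0.0, 0.0)
q_fo = (np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4))
print(elbow_angle(q_up, q_fo))  # ~90.0
```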

DESIGN REVIEW 2 FEEDBACK

From Design Review 2, we received feedback on both the effectiveness of our presentation and our design.

Presentation

In terms of presentation, we need to narrow the focus of our problem. Rather than framing it as tackling stroke rehabilitation at large, we can focus on how the market lacks an easy-to-use, reliable method of tracking patient data during at-home rehabilitation. Our solution fills this niche, giving others a much clearer idea of what exactly our product is supposed to do and how it can help the community. We also needed to properly explain our reasoning for choosing Movesense over products like the Apple Watch. Ultimately, this comes down to price as well as ease of development: while the Apple Watch provides a lot of functionality, we found it restrictive in terms of the data analysis we could conduct and how we could present results to the user.

Design

Additionally, we received feedback on our design as a whole, namely the possibility of using a third sensor and addressing how accurate our data really needs to be.

Because patients who suffer from hemiparesis often compensate for the hemiparetic arm by bending over when picking up objects, adding another sensor would allow us to identify this type of movement. Mounting the third sensor on the trunk would let us measure the motion of the hemiparetic arm relative to the trunk. Although this adds another layer of complexity, it is something we are considering implementing if we have enough time.

Another key piece of advice from the design review concerned how sensitive our data collection needs to be for our features to work. One technical expert suggested that our data collection may not need to be extremely accurate in order to identify certain movements. If testing confirms this, it could save us significant time that would otherwise go toward further refining our algorithms, freeing us to implement more features for our app.

NEXT SEMESTER

For next semester, our focus is largely on determining the sensitivity of our data, getting more familiar with two-sensor data, and integrating the app with the sensors and cloud storage.

In terms of data processing, we need to determine how accurate our processed data needs to be to get the results that we want. If we find that the processed data is not accurate enough for our purposes, then we will need to put much more time into refining the algorithms we use rather than implementing more features.

For data collection, we have only been collecting data from a single sensor so far. Next semester, our goal is to begin collecting data from two sensors simultaneously, applying our algorithms to both streams of data and seeing if anything breaks. This integration might be extremely complex, and we may spend much of the semester getting a two-sensor system to work correctly.

Finally, we also need to begin integrating our app with the other components of our project. As of right now, our app consists of just the interface. Our focus on this front for next semester is integrating the algorithms for data processing with our app, as well as Google Firebase, allowing for safe cloud storage of the data we collect.
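
As a rough sketch of what the cloud-storage step could look like, the snippet below writes one recording-session document using Firebase's Python admin SDK (the app itself would use the iOS SDK). The collection name, fields, and "serviceAccount.json" credentials file are all placeholders, not our actual schema.

```python
# Sketch: store one recording session in Cloud Firestore via firebase_admin.
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("serviceAccount.json")  # hypothetical key file
firebase_admin.initialize_app(cred)
db = firestore.client()

session = {
    "user_id": "demo-user",                     # placeholder identifier
    "metric": "limb_activation_equality",       # placeholder metric name
    "value": 0.72,
    "recorded_at": firestore.SERVER_TIMESTAMP,  # timestamp set server-side
}
db.collection("sessions").add(session)          # one document per recording
```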
