Symbionic Project after 7 months: Struggling Onwards

Matthijs Cox
Symbionic Project
Aug 3, 2018

If you’re not failing, you’re not pushing your limits, and if you’re not pushing your limits, you’re not maximizing your potential.
- Ray Dalio in Principles

We’ve certainly struggled in the past period, but we’ve also got some great results. Here’s the summary:

  • Acquire budget from management (6 weeks)
  • Complete financial transaction with Chinese startup (4 weeks), which took a lot longer than imagined
  • Myoelectric Armband arrived! (1 week)
  • Understand how to get Armband to work (2 weeks)
  • Gather data with armband (ongoing)

Here’s the OYMotion gForce armband, finally arrived, yay!

In parallel we did the following:

  • Continue algorithm development and data processing
  • Investigate real time data acquisition from the armband
  • Draft Data Privacy (GDPR) forms for collecting data
  • An eager student requested to start an internship with us!
  • New member Israel joined, and started building a virtual hand to control
  • Started writing introduction documents and info for new members, including a Python data science tutorial (see github), using data that I made publicly available.
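
As an aside on the real-time acquisition point: a common approach is to buffer incoming samples into overlapping analysis windows before any processing. Here's a minimal sketch of that idea in Python; the channel count, window size and step size are hypothetical choices of ours for illustration, not OYMotion's specification.

```python
from collections import deque

# Hypothetical parameters for illustration only.
CHANNELS = 8   # assuming an 8-channel EMG armband
WINDOW = 40    # samples per analysis window
STEP = 20      # hop size: 50% overlap between windows

class EmgWindower:
    """Collects raw EMG samples and emits fixed-size analysis windows."""

    def __init__(self):
        self._buffer = deque(maxlen=WINDOW)  # oldest samples fall off automatically
        self._since_last = 0

    def push(self, sample):
        """Add one multi-channel sample; return a full window when one is ready."""
        assert len(sample) == CHANNELS
        self._buffer.append(sample)
        self._since_last += 1
        if len(self._buffer) == WINDOW and self._since_last >= STEP:
            self._since_last = 0
            return list(self._buffer)
        return None
```

Each emitted window can then be fed to feature extraction and gesture classification, while the stream keeps flowing in.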

Are we still on track?

After everything that has happened, we wondered whether we were still on track to accomplish our goals. So we asked ourselves: did we fail or not? But we realized we couldn’t answer this question easily, so we had to discuss again what we were actually trying to prove.

After reviewing our early beliefs and brainstorming a bit, we think our current hypothesis is as follows:

If it is easier to learn to use bionic hands accurately, amputees will use them more and live happier lives.

People often discard their bionic hand or arm, because the discomfort of the device does not outweigh the (small) benefits in daily functionality. The discomfort arises in physical form (uncomfortable sockets), but also in a more mental form: learning to control the bionic hand takes a lot of effort, because it’s not an intuitive, fun or natural process.

We’ve set out to prove our hypothesis with the smallest possible effort (a minimum viable product): by buying an existing sensor armband and showing that an easy-to-use learning environment can be built that detects your gestures accurately, which can then actuate a bionic hand.

OYMotion seemed our best starting point for this. They have an armband, and are themselves working on a learning environment. So we can start by trying this out, and then improve their algorithms or methods where needed.

All in all, we think we’re still on track.

Usability study of the armband

We’ve started to use the OYMotion armband. First on ourselves, and then on several of our colleagues. Obviously, we now want to ask others, especially amputees, to try it out as well. Then we can learn as quickly as possible from real-life cases, not just from our literature studies and expert interviews.

Our colleague using the armband and learning application.

The OYMotion product comes with its own learning application, in which you are asked to wear the armband and perform the gestures several times. Afterwards, it uses machine learning to improve the gesture recognition of your personal muscle signals. You can see the process here: https://www.youtube.com/watch?v=mMPr2MOXgfk
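
OYMotion's actual training algorithm is proprietary, but the general recipe behind this kind of personalization is simple: extract amplitude features from each gesture's training windows, then fit a classifier to your personal signals. Here's a minimal, hypothetical sketch using RMS features and a nearest-centroid classifier (our own illustration, not OYMotion's method):

```python
import math

def rms_features(window):
    """Root-mean-square amplitude per channel: a standard EMG feature."""
    channels = zip(*window)  # window is a list of per-sample channel lists
    return [math.sqrt(sum(x * x for x in ch) / len(ch)) for ch in channels]

def train_centroids(labelled_windows):
    """Average the feature vectors of each gesture's training windows."""
    sums, counts = {}, {}
    for label, window in labelled_windows:
        feats = rms_features(window)
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(window, centroids):
    """Pick the gesture whose centroid is closest to this window's features."""
    feats = rms_features(window)
    return min(centroids,
               key=lambda lbl: sum((f - c) ** 2 for f, c in zip(feats, centroids[lbl])))
```

The "learning phase" in the app corresponds to collecting the labelled windows and computing the per-gesture model; the "testing environment" then just calls the classifier on live windows.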

We have all tried out the application and used it to personalize the gesture recognition. After the learning phase, you can try out the performance by yourself. Here’s a movie of me trying out the gesture testing environment:

Here’s a movie of me trying out the android app, controlling a virtual hand:

Learnings from data gathering so far:

  • OYMotion’s algorithm accuracy drops when taking the armband off after training and putting it on again. How can we be robust to that? Do we need to be? Maybe a prosthesis is always placed in exactly the same position? We will run some more tests with repositioning the armband.
  • OYMotion’s algorithm accuracy also drops when switching quickly between gestures. We believe this is inherent to the training data, in which there is always a resting period between gestures and it never switches from one gesture to another. Do we need to solve this?
  • The first training and test is magical to users; after that, it quickly becomes tedious to do more training. Do we need to solve this? Should it be more gamified to be fun?
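
For the repositioning tests, one cheap trick is to simulate donning offsets in software: on a ring-shaped armband, rotating the device roughly corresponds to rotating the channel order. A toy sketch (our own illustration, not OYMotion's method) of why a position-dependent model breaks after re-donning:

```python
def rotate_channels(window, shift):
    """Simulate re-donning a ring armband rotated by `shift` electrode slots."""
    return [list(sample[shift:]) + list(sample[:shift]) for sample in window]

def strongest_channel(window):
    """Toy position-dependent 'classifier': which channel carries the most energy?"""
    energy = [sum(s[i] ** 2 for s in window) for i in range(len(window[0]))]
    return energy.index(max(energy))
```

A model trained to expect a gesture's energy on a specific channel will point at a different channel after rotation, which is exactly the accuracy drop we observed; generating rotated copies of the training data is one possible way to build in robustness.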

Business development

If we were let go into the wild, how would we create a sustainable solution? I’ve been thinking a bit about how we are positioning ourselves in the existing market. Currently we focus fully on a software solution, while other parties (commercial or open source) are either trying to come up with a fully integrated solution (sensor + software + arm) or tackling parts of this problem. Most new entrants, like the startup Open Bionics, focus on realizing the same functionality at much lower cost than existing parties.

Our own solution would only work properly if it interfaces smoothly with the existing products (sensors and actuators) out there. The idea would be to create a single learning environment with its own domain model, and build custom interfaces with each possible party. My own high-level system architecture would then look something like this:

For a first minimum viable product, we would focus on interfacing with an EMG device (OYMotion) and actuating either just a simple virtual hand on a laptop, or a real bionic hand from another party, like the Bionico hand from My Human Kit. However, I think that to validate our hypothesis as quickly as possible, a virtual hand is acceptable for now.
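
To make the interfacing idea concrete, here is a rough Python sketch of that separation: one core engine with its own domain model, plus thin abstract interfaces per sensor and actuator vendor. All class and method names here are hypothetical, not any vendor's actual API.

```python
from abc import ABC, abstractmethod

class EmgSensor(ABC):
    """Adapter interface for any EMG device (e.g. a wrapper around a vendor SDK)."""
    @abstractmethod
    def read_window(self) -> list:
        """Return one window of raw multi-channel EMG samples."""

class HandActuator(ABC):
    """Adapter interface for any hand: a virtual hand on a laptop or a real bionic hand."""
    @abstractmethod
    def perform(self, gesture: str) -> None:
        """Execute a recognized gesture on the hand."""

class GestureEngine:
    """Core domain logic: maps sensor windows to gesture commands."""
    def __init__(self, sensor: EmgSensor, actuator: HandActuator, classifier):
        self.sensor, self.actuator, self.classifier = sensor, actuator, classifier

    def step(self):
        window = self.sensor.read_window()
        gesture = self.classifier(window)
        self.actuator.perform(gesture)
        return gesture
```

Swapping the OYMotion armband for another sensor, or the virtual hand for a real one, would then only mean writing a new adapter, without touching the learning environment itself.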

We want to have some demonstration capability by the end of the year, so we are constantly thinking how to get there as soon as possible with our limited amount of resources.
