My introduction to machine learning through the world of biotech

Raphael Schmetterling
Hackers at Cambridge
4 min read · Jan 25, 2018


Last summer I had the fortune of interning at an ambitious young start-up called Cambridge Bio-Augmentation Systems, or CBAS for short (pronounced ‘Seabass’). It’s one of the first neural engineering companies in the world, building the connection between the human nervous system and computers. In other words, it is building a ‘USB port to the body’: a next-generation connector for prosthetics that attaches to the nervous system, so that a patient can control the prosthetic with their brain.

What has this got to do with machine learning? The connector picks up neural signals travelling from the brain to the limb; these signals are simply voltages. CBAS is developing a neural network (a machine learning model) that can interpret these signals from the brain. If we can understand the brain’s messages, we can feed them as inputs into software. The idea is to create a new platform technology for interacting with the human nervous system.

My goal for the summer project was to refine this model by testing it on accelerometer and gyroscope data recorded from motion, a simpler dataset than neural signals. The net would classify the motion into a category such as walking, turning or sitting down. My first, and perhaps most important, task was to acquire the data. In our case, we had to generate it ourselves!

Walking, Walking, Walking…

Neural nets need data; lots of it. In my case, this meant walking for hours up and down the office balcony with a sensor strapped to my leg, much to the amusement of engineers at the other start-ups. Fortunately, with the help of Spotify, the odd TED talk and the contributions of my fellow interns, this tedious but necessary task soon produced sufficient data.

If the data fed into a neural net is not collected in the right way, getting useful results can become impossible. Several years ago, a neural net successfully managed to identify cars in images of the streets of LA. When Cambridge PhD students then fed in pictures from their more overcast hometown, the model failed completely. It turned out that the net had not learned to identify the cars themselves, but the shadows they cast on the ground in the California sun.

Every detail of even my seemingly simple task had to be thought through in advance, such as the exact positioning of the sensor on the body. Factors such as the route taken around the balcony and the size of our turning circle were also considered: although the model would ideally handle any motion, the range of motion was deliberately restricted for my proof-of-concept to increase the chance of success. One factor I initially forgot to account for, and which a fellow intern picked up on, was the orientation of the sensor. Some of the time it was upside down, adding an extra variable to the dataset that reduced performance. Thankfully the data could be fixed with a script, sketched below, and no rerecording was necessary.
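That script isn’t reproduced in this post, but the idea is simple enough to sketch. The snippet below is a minimal illustration, assuming (hypothetically) that each recording is a NumPy array of shape (samples, 6), accelerometer x/y/z followed by gyroscope x/y/z, with a flag marking sessions where the sensor was upside down:

```python
import numpy as np

def normalise_orientation(recording, was_upside_down):
    """Return a copy of the recording with a consistent sensor orientation."""
    fixed = recording.copy()
    if was_upside_down:
        # A 180-degree flip about the sensor's x axis negates the y and z
        # components of both the accelerometer and the gyroscope readings.
        fixed[:, [1, 2]] *= -1.0   # accelerometer y, z
        fixed[:, [4, 5]] *= -1.0   # gyroscope y, z
    return fixed
```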

Learning on the Job

Before starting this placement, all I knew about was the simple feed-forward neural net. These are essentially functions that take an input, such as an image, and produce an output, such as a label (‘cat’ or ‘dog’). These models learn by looking at tens of thousands of images where the correct answer is known, and tweaking the function algorithmically so that it outputs the right answer more often. They can then look at a new image without knowing the answer and tell whether it shows a cat or a dog. These nets can be powerful, but they assume that the inputs are all independent of one another. This is not the case with most data that varies with time, including my own motion recordings: no accelerometer reading is independent of the one before. This is where recurrent neural nets (RNNs) come into play.
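To make the ‘function from input to output’ idea concrete, here is a minimal, untrained feed-forward net in NumPy. The layer sizes and activations are illustrative assumptions, not the model I worked on:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer net: 784 inputs (a flattened 28x28 image),
# 64 hidden units, 2 output classes ('cat' and 'dog').
W1, b1 = rng.normal(size=(784, 64)) * 0.01, np.zeros(64)
W2, b2 = rng.normal(size=(64, 2)) * 0.01, np.zeros(2)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)   # ReLU non-linearity
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()                # [P(cat), P(dog)]

print(forward(rng.normal(size=784)))      # roughly 50/50 before any training
```

Training would adjust W1, b1, W2 and b2 (typically by gradient descent) so that forward outputs the right answer more often.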

An RNN is essentially a series of feed-forward nets, each one with two inputs: the current data point, and the output of the previous net. That previous net was in turn fed the data point before it, plus the output of the net before that. RNNs are very popular and are also used in applications such as translation and predictive text, as languages are great examples of dependent inputs (each word being one input).
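The same idea in code: a bare-bones recurrent cell in NumPy. The six input channels mirror my sensor setup, but the weights and sizes here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 6, 32   # e.g. 3 accelerometer + 3 gyroscope channels
Wx = rng.normal(size=(input_size, hidden_size)) * 0.1
Wh = rng.normal(size=(hidden_size, hidden_size)) * 0.1
b = np.zeros(hidden_size)

def run_rnn(sequence):
    h = np.zeros(hidden_size)     # the 'output' before the first data point
    for x_t in sequence:          # one sensor reading per time step
        # Each step sees the current data point AND the previous step's output.
        h = np.tanh(x_t @ Wx + h @ Wh + b)
    return h                      # a summary of the whole sequence so far

state = run_rnn(rng.normal(size=(100, input_size)))   # 100 sensor readings
```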

Once I’d caught up on the theory, and on the particulars of the company’s model, I was ready to start optimising it. This meant tweaking the hyperparameters, a big part of developing neural nets. Hyperparameters are the ‘settings’ on the model which are decided manually before training begins. Take my motion classifier: how many seconds of motion data should be fed into the model as a single input? 1? 5? There is no clear right answer, and in fact this is one of about 12 hyperparameters on this particular model. Finding the optimum point in twelve-dimensional space is non-trivial, and requires a mixture of intuition and trial and error. Taking the same example, a single input could be taken as half a second of data, the rough duration of a single step. This will likely perform better than a millisecond, but what about 0.4 seconds? The only way to know is to try it, as the sketch below illustrates.
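That trial-and-error loop for a single hyperparameter looks something like this. Note that train_and_evaluate is a hypothetical stand-in for the real training and validation routine, replaced with a dummy score here so the example runs on its own:

```python
def train_and_evaluate(window_seconds):
    # Placeholder for the real training + validation run; the dummy score
    # below just lets this sketch execute without a dataset.
    return 1.0 - abs(window_seconds - 0.5)

window_lengths = [0.1, 0.25, 0.4, 0.5, 1.0, 5.0]   # seconds per single input
results = {w: train_and_evaluate(w) for w in window_lengths}
best = max(results, key=results.get)
print(f"Best window: {best}s (score {results[best]:.2f})")
```

Multiply that loop across twelve hyperparameters and an exhaustive sweep quickly becomes impractical, which is where the intuition comes in.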

Final Thoughts

My goal for this placement was to get my first experience of machine learning in the workplace. Learning about bioengineering was fun, but the principles I picked up would apply in any industry. The end result of my work was a classifier that could categorise motion live. The accuracy wasn’t perfect, particularly when transitioning between actions (e.g. standing up), but it was good enough to prove the concept. Building this was very satisfying, and it has encouraged me to take machine learning courses as part of my degree, combining my new-found intuition with a theoretical grounding. All in all, an experience I really enjoyed.

To learn more about CBAS, visit cbas.global
