The future of Personal Machine Learning

Marti Planellas
Nov 30, 2016

A lot has been said lately about Machine Learning and the implications it will have on our future.

The applications in use at the moment are fairly generic, both in the domain they address and in their context: they try to solve the same problem for everyone. Facial or object recognition, natural language processing and other classification problems all take a large base of data and build a single model for everyone to use. Assistants and chatbots are beginning to scratch the surface by bringing a bit more context, but the extreme personalisation they could potentially achieve still remains far ahead.

The case of the keyboard

SwiftKey is a keyboard app for Android and iOS that uses predictive technology to guess which word you want to type next, or which word you mean when you swipe. The algorithm learns with experience: the more you use it, the more accurate it becomes.
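SwiftKey's actual algorithm is proprietary, but the core idea of a keyboard that "learns with experience" can be sketched with a toy bigram model: count which word the user tends to type after each other word, and suggest the most frequent followers. Everything below (class name, method names) is invented for illustration.

```python
from collections import defaultdict, Counter

class NextWordPredictor:
    """Toy bigram model: a loose sketch of how a predictive keyboard
    might learn word-to-word transitions from what one user types.
    SwiftKey's real algorithm is proprietary; this is illustrative only."""

    def __init__(self):
        # For each word, a counter of the words that followed it.
        self.transitions = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, word, k=3):
        # Most frequent followers of `word` in this user's history.
        return [w for w, _ in self.transitions[word.lower()].most_common(k)]

predictor = NextWordPredictor()
predictor.learn("see you at the office")
predictor.learn("see you at the pub")
predictor.learn("see you at the office")
print(predictor.predict("the"))  # 'office' ranks above 'pub'
```

The point the article makes follows directly from this shape: the value is not in the code, which is generic, but in the accumulated `transitions` table, which is unique to one person and grows with years of typing.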

I’ve been using SwiftKey for many years now, and I can say from experience that it has become quite smart. Because it learns from what I type (my email address, my postcode, anything else I have to write often day to day), these things are one tap away as soon as I type the first letter or two.

One of the app’s limitations is that, currently on iOS, only two languages can be active at the same time. I was raised bilingual (I speak Catalan and Spanish natively) and I live in the UK, so I use English a lot as well. I’ve chosen Catalan and English as my keyboard languages since they are the ones I use the most, but I still write in Spanish with some of my friends, and SwiftKey doesn’t provide any support for that. The reality is that I’ve typed so many Spanish words already that I’ve almost taught the algorithm a new language from scratch (or at least the words I use the most).

So basically I’ve spent a lot of time teaching this algorithm how I write: the words I use the most, and even lots of words in a language it doesn’t know. This is data that probably only applies to me and isn’t valuable to anyone else, yet despite all this, frustratingly, I don’t own this model. I can’t take it with me and apply it to something else, because this information lives locked inside one app. If I decided to change to a different keyboard (I tried Google’s, for example), I would have to start over and would lose years of personalisation.

I don’t own this model, I can’t take it with me and apply it to something else, this information lives locked into one app

The case of assistants

This prompted me to think about how this might happen in other areas, for example the rapidly rising world of A.I. assistants.

If I choose an A.I. assistant from one brand or another — Siri, Google Assistant, Alexa, Cortana… take your pick — that would probably mean marrying that brand practically for life.

When these A.I. assistants stop applying general crowd-mined models to everything and start customising a model for each user — Google has already started doing this — then the more you use one, the more you train it, the more you will depend on it, and ultimately you won’t be able to switch to a different one in the future.

All the big companies are developing A.I. assistants

The only way to prevent this would be for these companies to provide an export/import mechanism that allows your model to be moved from one system to another. Only then will the time we spend on these apps, and our continuous use of them, bear fruit we can take with us anywhere we want.

To be able to do that there would need to be a standard format for sharing this information, and here is where it gets tricky: standards have a way of multiplying, because everyone thinks the current ones fall short and that they can do better. Ideally a governing entity would act as a referee to define and enforce these standards.

In this ideal world, we should be able to own the intellectual property of the models we generate — as long as they’re customised to us — and have the freedom to switch assistants as we switch banks or homes.

The race for A.I. assistants is going to be fierce and frantic in the years to come. Existing companies and new ones will compete to create features that will attract more users and push the technology further.

Different assistants will probably differentiate themselves by specialising in different areas. If you have lots of meetings and struggle to keep track, you may want an assistant that is expert at setting up and managing your calendar. Or perhaps you would prefer one with expertise in scouting news sites for your interests and presenting the best stories on your favourite subjects.

Your preferences might change over time, so the solution might be to have an army of specialised assistants, or keep switching from one to another as your needs change.

To truly do that with peace of mind, we should be able to take everything an assistant has learned about us and transfer it to the next, just as a human assistant would hand over to a successor. Our model would keep growing and being nurtured, and ultimately any future assistant would know us better than all the previous ones put together.

I think OpenAI is a good start towards that goal, and more companies are signing up for it. Being a non-profit organisation that takes the whole of humanity as its target probably puts it in the best possible position to become an entity that defines standards and pushes for-profit companies to collaborate with one another beyond the profit they can extract from our data, and perhaps even to provide a seal of approval that would let us quickly identify compatible applications.

For now we just need to be extra careful when choosing new tools that make use of A.I. until that portability is in place, and maybe not invest too much time or data just yet.
