Why Turing’s legacy demands a smarter keyboard

A sculpture of Alan Turing at Bletchley Park, England

When you start a company, you dream of walking in the footsteps of your heroes. For those of us working in artificial intelligence, the hero who always comes to mind is Alan Turing, the British computer scientist and father of the field. I thought of him when I did my PhD, when I co-founded an AI keyboard company in 2009, and when we pasted his name on a meeting room door in our first real office.

Today is a big day for SwiftKey, a British tech company. We’ve introduced some of the principles originally conceived by Turing — artificial neural networks — into our smartphone keyboard for the first time. I want to explain how we managed to do it, and how a technology like this, something you may never have heard of before, will help define the smartphone experience of the future. This is my personal take; for the official version, check out the SwiftKey blog.

Frustration-free typing on a smartphone relies on complex software to automatically fix typos and predict the words you might want to use. SwiftKey has been at the forefront of this area since 2009, and today our software is used across the world on more than half a billion handsets.

Soon after we launched the first version of our app in 2010, I started to think about using neural networks to power smartphone typing rather than the more traditional n-gram approach (a sophisticated form of word frequency counting). At the time it seemed little more than theoretical, as mobile hardware wasn’t up to the task. However, three years later, the situation began to look more favorable, and in late 2013, our team started working on the idea in earnest.
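To give a flavor of the traditional approach: an n-gram model simply counts how often each word follows a given context and predicts the most frequent continuation. Here’s a minimal bigram sketch in Python (illustrative only; a production keyboard model adds smoothing, pruning and highly compact storage):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count, for each word, how often every other word follows it.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_word, k=3):
    # The k most frequent continuations of prev_word.
    return [w for w, _ in counts[prev_word].most_common(k)]

tokens = "the cat sat on the mat and the cat ran".split()
model = train_bigram(tokens)
print(predict_next(model, "the"))  # ['cat', 'mat']: "cat" is most frequent
```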

To build a neural network-powered SwiftKey, our engineers faced an enormous challenge: making the models run locally on a smartphone without any perceptible lag. Neural network language models are typically deployed on large servers with huge computational resources, so getting the technology to fit into a handheld mobile device would be no small feat.

After many months of trial and error, the team realized they might have found an answer in a combination of two approaches. The first was to make use of the graphics processing unit (GPU) on the phone, exploiting the powerful hardware acceleration designed for rendering complex graphical images. The second, thanks to some clever programming, was to run the same code on the standard central processing unit (CPU) when a GPU wasn’t available. This combination turned out to be the winning ticket.
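The general dispatch pattern is easy to sketch in Python. This is an illustration of the idea only, not our actual code; the `detect_gpu` helper and the GPU handle are placeholders I’ve invented for this example:

```python
import numpy as np

def detect_gpu():
    # Placeholder: a real mobile implementation would probe the device
    # (e.g. via a graphics or compute API) for a usable GPU and return
    # a handle to it; here we pretend none was found.
    return None

def evaluate_cpu(weights, inputs):
    # The portable path: the same matrix multiply the GPU path would
    # perform, run on the standard CPU.
    return inputs @ weights

def evaluate(weights, inputs):
    gpu = detect_gpu()
    if gpu is not None:
        return gpu.evaluate(weights, inputs)  # hardware-accelerated path
    return evaluate_cpu(weights, inputs)      # fallback path, same result

x = np.ones((1, 4))    # a dummy input vector
w = np.ones((4, 2))    # dummy network weights
print(evaluate(w, x))  # [[4. 4.]]
```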

So, back to Turing. In 1948 he wrote a little-known essay called Intelligent Machinery in which he outlined two forms of computing he felt could ultimately lead to machines exhibiting intelligent behavior. The first was a variant of his highly influential “universal Turing machine”, destined to become the foundation for hardware design in all modern digital computers. The second was an idea he called an “unorganized machine”, a type of computer that would use a network of “artificial neurons” to accept inputs and translate them into predicted outputs.

The idea of connecting together many small computing units, each able to receive, modify and pass on basic signals, is inspired by the structure of the human brain. That’s why the software embodiment of this concept is called an “artificial neural network”, or “neural network” for short. A collection of artificial neurons is connected together in a specific arrangement (called a “topology”) such that a given set of inputs (what you’ve just typed, for example) can be turned into a useful output (e.g. your most likely next word). The network is then “trained” on millions, or even billions, of data samples, and the behavior of the individual neurons is automatically tweaked to achieve the desired overall results.
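To make that concrete, here’s a toy example (emphatically not our production model): a single-layer network in Python that learns next-word prediction over a five-word vocabulary. Training repeatedly nudges the weights to reduce prediction error, which is exactly the automatic “tweaking” described above:

```python
import numpy as np

# Toy vocabulary and (previous word -> next word) training pairs,
# drawn from the sentence "the cat sat on the mat".
vocab = ["the", "cat", "sat", "on", "mat"]
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)
pairs = [("the", "cat"), ("cat", "sat"), ("sat", "on"),
         ("on", "the"), ("the", "mat")]

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # the trainable connection weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):
    for prev, nxt in pairs:
        x = np.zeros(V); x[idx[prev]] = 1.0  # input: one-hot previous word
        p = softmax(W.T @ x)                 # output: next-word probabilities
        grad = p.copy(); grad[idx[nxt]] -= 1.0
        W -= 0.1 * np.outer(x, grad)         # tweak weights to reduce error

x = np.zeros(V); x[idx["cat"]] = 1.0
print(vocab[int(np.argmax(W.T @ x))])  # most likely word after "cat": sat
```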

In the last few years, neural network approaches have facilitated great progress on tough problems such as image recognition and speech processing. Researchers have also begun to demonstrate advances in non-traditional tasks such as automatically generating whole sentence descriptions of images. Such techniques will allow us to better manage the explosion of uncategorized visual data on the web, and will lead to smarter search engines and aids for the visually impaired, among a host of other applications.

The fact that the human brain is so adept at working with language suggests that neural networks, inspired by the brain’s internal structure, are a good bet for the future of smartphone typing. In principle, neural networks also allow us to integrate powerful contextual cues to improve accuracy, such as a user’s current location or the time of day. These will be stepping stones to more efficient and personal device interactions — the keyboard of the future will provide an experience that feels less like typing and more like working with a close friend or personal assistant.
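As a purely illustrative sketch of how such cues might be wired in (the encoding below is my assumption for this post, not a description of our product): one option is simply to concatenate context features onto the word features before they enter the network:

```python
import numpy as np

def encode_input(prev_word_onehot, hour_of_day, location_id, n_locations=4):
    # One-hot encode the contextual cues and append them to the word
    # features, so the network can learn time- and place-dependent habits.
    hour = np.zeros(24); hour[hour_of_day] = 1.0
    loc = np.zeros(n_locations); loc[location_id] = 1.0
    return np.concatenate([prev_word_onehot, hour, loc])

word = np.zeros(4); word[1] = 1.0                     # previous word
x = encode_input(word, hour_of_day=8, location_id=2)  # 8am, at "work"
print(x.shape)  # (32,): the network now sees time and place as well
```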

Applying neural networks to real-world problems is part of a wider technology movement that’s changing the face of consumer electronics for good. Devices are getting smarter, more useful and more personal. My goal is that SwiftKey contributes to this revolution. We should all be spending less time fixing typos and more time saying what we mean, when it matters. It’s the legacy we owe to Turing.

The photograph “Alan Turing” by joncallas is licensed under CC BY 2.0.