Take X and add AI

John Henderson
Airtree
Apr 27, 2015 · 6 min read


Entering an era of intelligent automation

“The business plans of the next 10,000 startups are easy to forecast: take X and add AI” — Kevin Kelly, Wired

Alan Turing first suggested that computers might be capable of thought back in 1950. In 1997, a supercomputer called Deep Blue beat world chess champion Garry Kasparov. In 2011, IBM’s Watson became the world’s best Jeopardy player. In 2015, a car drove itself across the US.

These feats were powered by machine intelligence — a catch-all term for artificial intelligence, machine learning, deep learning and other related fields. A perfect storm of exponential growth in computational power, bigger data and more capable algorithms has driven rapid progress in these areas in recent years. Thousands of machine intelligence startups are now emerging, many of which are poised to change the world in the next 5–10 years.

Of the teams and companies I’ve met so far, the most interesting are those which combine expertise in machine intelligence with another technology such as computer vision, natural language processing or speech recognition. If a computer can think, and you also give it the ability to see, read or listen, amazing possibilities emerge. It is suddenly able to complete various tasks that would have previously required a human.

Let’s consider two categories of those tasks: “communication tasks” and “visual tasks”.

Communication tasks

Computers are learning to interpret and produce written and spoken content. At Summly, we used natural language processing (“NLP”) to summarise over 200,000 news articles every day and deliver a personalised stream of content to our users. Siri and Google Now combine NLP with speech recognition technology to allow you to ask verbal questions of a computer and receive spoken answers. Viv and others are developing technology which takes this one step further: they aim to give computers a conversational memory and allow for follow-ups and clarifications. In other words, a proper conversation.
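The frequency-based idea behind many extractive summarisers can be sketched in a few lines. This is a toy illustration only (nothing like Summly’s actual pipeline, and the `summarise` function is hypothetical): score each sentence by how frequent its words are across the whole article, then keep the top-scoring sentences in their original order.

```python
import re
from collections import Counter

def summarise(text, num_sentences=2):
    """Toy extractive summariser: rank sentences by word-frequency score."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        # Average frequency of the words in this sentence.
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Emit the chosen sentences in the order they appeared.
    return ' '.join(s for s in sentences if s in ranked)
```

Real systems add far more (position, named entities, learned models), but the core pattern — score, rank, select — is the same.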

This is an interaction model which has long been envisaged in science fiction and is becoming a commercial reality. It is a fundamental innovation which will allow us to interface with machines that don’t have a natural visual UI such as cars, wearable devices, robots or even your house.

The more immediate (1–5 year) opportunity is the potential automation of all kinds of communication tasks which are currently performed by humans. How can we identify which tasks are ripe for automation and where value will be created? We must look for interactions which follow patterns, and on which humans currently spend a great deal of time. Here are some obvious ones that spring to mind:

  • Medical diagnostics: Your interaction with your local GP largely involves you describing a list of symptoms and them hypothesising the most likely ailment. This is an ideal use case for a smart computer with language skills. It is quite possible that IBM’s Watson will soon be the best doctor in the world. It has access to all up-to-date medical knowledge, is accurate, consistent, and could theoretically be available to anyone 24 hours a day.
  • Scheduling meetings: This has traditionally been a role for secretaries and executive assistants. My personal assistant’s name is Amy Ingram. “She” is the creation of x.ai and almost no-one that I introduce her to realises they are not talking to a human being. You can get a similar service from Claralabs and others.
  • Language learning: Wouldn’t it be great to have a conversation partner that speaks perfect Mandarin, doesn’t mind correcting your mistakes, and never gets bored? Or how about a universal real-time computerised translator instead?
  • Journalism: Narrative Science and others are training computers to write news stories. Opinion pieces in the Atlantic are still a ways off, but sports results and financial stories written by robots are here already.
  • Recruitment: For the most part, recruiters are essentially intermediaries that connect candidates and employers based on a defined set of features (location, industry, skills, experience). I can’t wait to meet a company using machine intelligence and NLP to identify and curate candidates automatically. The first interview will almost certainly also be conducted by a computer.
  • Online travel agents: Is that ‘real time assistant’ on flightcentre.com who is helping find your hotel in Prague a person or a robot? I’m pretty confident of what the answer will be by 2017.
  • Legal tasks: The discovery process in litigation requires armies of junior lawyers to comb through mountains of documents in search of specific information. Groups like Equivio combine machine intelligence and NLP to automate this function, delivering results faster and more accurately than a paralegal at 2am on her 8th coffee of the day.
  • Call centres: Folks in call centres are trained to follow scripts and react to certain situations in specific ways. However, not all remember or comply with the script. How many times have you got different answers from sales assistants when trying to change your phone plan? This strikes me as an ideal task for an intelligent computer leveraging NLP and speech recognition technologies. It’s a massive market if you get it right.
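A call-centre script of the kind described above is essentially a decision tree, which is exactly why it is so automatable. Here is a toy sketch (the `SCRIPT` flow and `respond` function are entirely hypothetical): each node maps a caller’s intent to either a clarifying question or a scripted action.

```python
# A call-centre script modelled as a tiny decision tree:
# each intent leads to a follow-up question or a scripted reply.
SCRIPT = {
    "change_plan": {
        "question": "Do you want more data or a cheaper bill?",
        "more_data": "Upgrading you to the 10GB plan.",
        "cheaper": "Moving you to the basic plan.",
    },
    "cancel": {"question": None, "reply": "Transferring you to retentions."},
}

def respond(intent, answer=None):
    """Follow the script: ask the node's question first, then act on the answer."""
    node = SCRIPT[intent]
    if node.get("question") and answer is None:
        return node["question"]
    if answer is not None:
        return node[answer]
    return node["reply"]
```

Unlike a human agent, the computer never forgets the script or improvises a wrong answer — plug in speech recognition on the way in and speech synthesis on the way out, and the interaction model is complete.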

Visual tasks

A breakthrough in computer vision technology occurred in February 2015: machines learned to see better than humans can. More precisely, a computer looked at a series of pictures and classified them (i.e. determined their contents) more accurately than a person would. Combine that level of computer vision (the ability to see) with machine intelligence (the ability to think) and fascinating possibilities emerge.
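At its core, image classification is a labelling function learned from examples. A toy nearest-neighbour sketch makes the idea concrete (purely illustrative — nothing like the deep neural networks behind the 2015 result, and the `classify` function is hypothetical): represent each image as a vector of pixel intensities, and give a new image the label of its closest training example.

```python
import math

def classify(image, training_set):
    """Label an image with the label of its nearest training example.

    image: a flat list of pixel intensities.
    training_set: list of (pixels, label) pairs.
    """
    def distance(a, b):
        # Euclidean distance between two pixel vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    _, label = min(training_set, key=lambda example: distance(image, example[0]))
    return label
```

Modern systems learn far richer representations than raw pixels, but the principle — compare a new input against what the machine has already seen — carries over.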

Various jobs involve a human reviewing an image and making an assessment of its contents. Obvious examples include the airport security official who scans your bags looking for that gun you shouldn’t be taking on the plane, or the radiologist who trains for 5+ years to be able to accurately diagnose illnesses from scans of your body.

I’ve no doubt that these types of tasks will soon be performed more quickly, accurately, and much more cheaply by a computer. The list of possibilities for disrupting existing industries goes on and on. Here are some thought starters:

  • Automotive: Self-driving cars are the most well-developed example of combining computer vision with machine learning. Maybe don’t apply for an Uber licence after all…
  • Agriculture: Why pay someone to drive around huge swathes of farmland if a computer could monitor your crops based on satellite images?
  • Military: Cockpit displays and drones detect objects and use machine intelligence to automatically identify whether they might pose a threat (I’ll bet the CIA’s venture arm has a bunch of unannounced investments in this area…)
  • Infrastructure: Google’s Street View is a database of images of roads throughout much of the world. I’ll bet municipal governments would love to run an algorithm which automatically identified where roads and buildings needed maintenance, rather than sending inspectors out in droves.
  • Medical: Why do I need to go to a doctor to be checked for skin cancers? Shouldn’t I be able to simply take a photo of a freckle and have a computer analyse it instantaneously? The same logic surely also applies to MRI scans, x-rays and many other kinds of diagnostics.
  • Construction maintenance: Safety inspectors comb over facilities like oil pipelines and refineries for cracks or potential flaws. What if a drone with a camera did a flyover and a computer analysed the footage in real time?

There are thousands of verticals in which inspection tasks could be automated, many of them quite obscure. Through photos, KeyMe is looking to automate the $6bn locksmith market. Tractable is helping plumbers figure out if their pipes are being welded properly!

Taking X and adding AI will fundamentally disrupt a whole bunch of different industries — there are an awful lot of Xs out there. Computers will begin to perform all kinds of communication and visual inspection tasks faster, more cheaply and more accurately than their human forebears. All the examples I’ve given in this post are either happening right now, or will exist in the next 5 years.

Questions or comments? Building a company in this field? Please get in touch!

Thanks to Christian, Mustafa, Miguel, Joe, Bart, James, Sofia, Benji, Abbie and Kim for their input.


Partner @airtreevc. Co-founder @Ldn_ai. Would rather be in the ocean.