Explain AI to me like I’m a 5-year-old…or a CEO

integrate.ai
6 min read · Jul 19, 2017


There are lots of articles about Artificial Intelligence, but it’s pretty hard to pin people down on a definition. So in this article, I’ll pull out the best definitions of AI from the Harvard Business Review, a publication that knows its C-suite audience well: the CEOs of big companies it writes for usually have decades upon decades of experience.

Our goal: Explain AI to 5-year-olds, to CEOs, and maybe to 5-year-old CEOs

But what if we swing to the opposite end of the spectrum, to people who have only been alive for half a decade? In the second part of the post, I’ll take data from child language acquisition studies and work out a definition of AI made up exclusively of words that five-year-olds understand. (The paragraphs you’ve just read wouldn’t work at all.)

Explain AI to me like I’m a CEO

As an AI startup, we often encounter people trying to piece together what AI could mean for their business, and it’s something we spend a fair amount of time discussing at conferences and with our partners. Making emerging tech more accessible is a project that makes good business sense and is an ethical imperative. One such project is the Tech 2025 series. In fact, on July 26th, Charlie Oliver is bringing her hit program to Toronto, featuring our VP of Product & Strategy, Kathryn Hume: Explain It Like I’m 5: What’s the Difference Between AI, Machine Learning, NLP, and Deep Learning?

AI technologies learn patterns. That’s what makes them powerful for doing things like, say, driving customer engagement. Consider all the ways your company can interact with customers: which ones actually work with which people? AI systems can figure out how to distinguish good customer journeys from bad ones in order to build better relationships. But that’s a fairly integrate.ai-centric example. Let’s look for a more general business definition.

Andrew Ng is a particularly well-established leader in the field of artificial intelligence. He’s a professor at Stanford and was (until recently) the chief scientist at Baidu, so he connects research and business as thoroughly as anyone.

Writing for the Harvard Business Review at the end of 2016, he summarized AI in terms of its core use cases:

Almost all of AI’s recent progress is through one type, in which some input data (A) is used to quickly generate some simple response (B).

For the more practically oriented, here’s a useful rule-of-thumb:

If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.
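To make that A-to-B pattern concrete, here’s a minimal sketch of the kind of supervised model Ng is describing. It’s not from his article: the loan-style example, the features, and the numbers are all made up for illustration.

```python
# A minimal illustration of Ng's "input data (A) -> simple response (B)" pattern.
# The loan-repayment framing and all numbers here are invented for illustration.
from sklearn.linear_model import LogisticRegression

# A: input data (annual income in $k, years at current job)
A = [[35, 1], [80, 6], [52, 3], [110, 9], [28, 0], [95, 7]]
# B: the simple response we want the model to learn (1 = repaid the loan)
B = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(A, B)  # learn the A -> B mapping from labeled examples

# Given a new input, the model generates its simple response almost instantly.
print(model.predict([[100, 8]]))  # e.g. [1]
```

The shape of the problem is what matters: lots of examples of A paired with B, and a model that learns to produce B for a new A in well under a second.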

A relatively small number of companies do a great deal of AI research — companies like Baidu, Google, and Facebook. But there are a lot of companies that can benefit from these technologies without doing AI research themselves. As Kristian Hammond, Chief Scientist at Narrative Science, writes (begging you not to hire a Chief AI Officer):

As the market has matured, AI is beginning to move into enterprises that will use it but not develop it on their own. They see intelligent systems as solutions for sales, logistics, manufacturing, and business intelligence challenges. They hope AI can improve productivity, automate existing processes, provide predictive analysis, and extract meaning from massive data sets. For them, AI is a competitive advantage, but not part of their core product.

A theme that runs throughout discussions of AI in business circles is automation. That’s important: AI technologies would have enormous political and economic consequences if, for example, they replaced the majority of long-haul trucking jobs in the near future.

But let’s look more closely at the notion of automation. If you’re the kind of CEO who sees workers primarily as expensive cogs in the corporate machine, then of course you want to automate them away. That’s a myopic and inhuman view of what happens in businesses. It keeps you focused on efficiency and misses the main opportunity: helping humans be more creative, strategic, and effective. Building off their book, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, Thomas Davenport and Julia Kirby write:

Intelligent machines…do not usher people out the door, much less relegate them to doing the bidding of robot overlords. In some cases these machines will allow us to take on tasks that are superior — more sophisticated, more fulfilling, better suited to our strengths — to anything we have given up. In other cases the tasks will simply be different from anything computers can do well. In almost all situations, however, they will be less codified and structured; otherwise computers would already have taken them over.

Explain AI to me like I’m five

Okay, so it should be clear by now why most CEOs are paying attention to the potential of AI. But what if we want to explain AI even more fundamentally? What if we want to go from explanations that are meaningful to business leaders to ones that work for pretty much anyone?

There’s a great subreddit focused on making explanations easy to understand: Explain Like I’m Five. But if you take the explanations of AI in the subreddit, you’ll find that they’re usually made up of words that five-year-olds don’t know. The problem starts right off the bat: native English speakers probably don’t understand intelligence until they’re about eight or nine years old, and don’t understand artificial until they’re about 10.

The data I’m using comes from a range of researchers, helpfully compiled by Marc Brysbaert. Some of the words people commonly use to describe AI do make the cut: people, thing, game, better, take, find, rule, human, question, work, answer, different, brain, able.

Beyond that, though, you’re largely out of luck.

Data from language acquisition studies show a lot of words associated with AI won’t work for a 5-year-old

Here’s the heart of Andrew Ng’s earlier definition:

some input data…is used to quickly generate some simple response

Input is out (acquired at 9–11 years old) and so is data (10–11). Most researchers put quickly outside our range, but a few have it in, so we’ll keep it. Generate doesn’t arrive until kids are almost 11. Simple doesn’t show up until 6, and response is acquired somewhere between 7 and 9.

Obviously “some … is used to quickly … some” is a terrible definition. It’d be great if we had a synonym for data, but even information isn’t acquired until around 7 or 8. One of the strongest themes in the Reddit data, though, is questions and answers, and that’s pretty useful.
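If you want to run this kind of vocabulary check yourself, here’s a minimal sketch of the filter. The age-of-acquisition numbers below are illustrative stand-ins, not Brysbaert’s actual values.

```python
# Sketch of the vocabulary check behind this post: keep only the words a
# five-year-old is likely to know. The age-of-acquisition (AoA) values
# below are illustrative stand-ins, not Brysbaert's actual numbers.
AOA = {
    "computer": 4.0, "brain": 5.0, "question": 4.5, "answer": 4.5,
    "input": 10.0, "data": 10.5, "generate": 10.9, "response": 8.0,
}

def words_kids_know(definition, max_age=5.0):
    """Return the words in a definition acquired by max_age.
    Words missing from the AoA table are treated as unknown."""
    words = definition.lower().split()
    return [w for w in words if AOA.get(w, float("inf")) <= max_age]

print(words_kids_know("some input data is used to quickly generate some simple response"))
# -> []  (essentially nothing in Ng's definition survives the filter)
```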

So how about the following?

AI is when you make a computer like a little brain. You help it to learn by giving it a lot of words and pictures and numbers. If the computer hears you answer a lot of questions, later on it can quickly answer your questions. But it only knows what you show it and tell it, so it’s not as smart as you are.

The only word in here that the Brysbaert data doesn’t support is computer, but other child language data sources (like CHILDES) show that at least some four-year-olds know and even use computer.

Have a five-year-old to try it on? Send us their response, ideally in video or audio! (I guess if you’re a CEO, feel free to send us videos, too?)

— Tyler

Tyler Schnoebelen (@TSchnoebelen) is principal product manager at integrate.ai. Prior to joining Integrate, Tyler ran product management at Machine Zone and before that, founded an NLP company, Idibon. He holds a PhD in linguistics from Stanford and a BA in English from Yale. Tyler’s insights on language have been featured in places like the New York Times, the Boston Globe, Time, The Atlantic, NPR, and CNN. He’s also a tiny character in a movie about emoji and a novel about fairies.
