What is Artificial Intelligence? Definitions for policy-makers and non-technical enthusiasts.

When I started researching artificial intelligence policy, I was a bit puzzled that there was no single generally accepted definition of AI. One AI expert told me the term ‘AI’ is just a catch-all for cognitive computer programs that are new and unproven. Once these programs become commonplace, we cease to view them as AI and simply call them web searches, mapping software or shopping recommendations. Clever, but not all that helpful for understanding AI.

If you ask ten AI experts what AI is, you’ll get ten different answers, and they’re all correct! This shows how complex the field is and how many facets AI has. But that complexity generally isn’t useful to policy professionals, who don’t have the time to explore these details extensively. So here is a set of short, simple artificial intelligence definitions for non-AI experts.

The Definitions

Artificial intelligence is popularly thought of as intelligence exhibited by machines. I call this the Intelligence Model, and while it’s absolutely correct, it is not how most AI technologists think about the subject. Here are a couple of examples of AI definitions using this Intelligence Model:

Intelligence Model

AI is an evolving constellation of technologies that enable computers to simulate cognitive processes, such as elements of human thinking (but also non-human forms of cognition); or

AI consists of systems that think like humans; systems that act like humans; systems that think rationally; or systems that act rationally.

These definitions work fine, but they are actually both too narrow and too broad to be useful by themselves. AI technologists take a different view from the Intelligence Model and think of AI as a discipline or field of problems to solve like physics or chemistry. That perspective is summarized below:

Discipline/Field Model

AI is a discipline centered on creating machines that can make decisions well under uncertainty; or

AI is a field centered on problems of designing agents that perceive and act to satisfy some objective, often without being explicitly programmed how to do so.

You can see how the Intelligence Model might not classify a robotic system as AI if it doesn’t seem ‘intelligent’. But the Discipline/Field Model could see the same bland robot as an agent that’s making a decision or satisfying an objective, and therefore a topic for AI research. The Discipline/Field Model focuses less on subjective definitions of ‘intelligence’ and more on perception, decision-making, and autonomously accomplishing outcomes.

We don’t want to get hung up on this, so just remember: AI can refer either to software and machines that exhibit intelligent behavior, or to the field that studies autonomous decision-making and action.

There is a sub-discipline of AI called machine learning, and a sub-discipline of machine learning called deep learning.

Machine learning extracts patterns from unlabeled data (unsupervised learning) or efficiently categorizes data according to pre-existing definitions embodied in a labeled data set (supervised learning). What you really need to know is that machine learning allows computers to learn without being explicitly programmed. You can feed these systems very large data sets — usually the bigger the better — and they will find hidden relationships in the data and improve their performance over time. But categorizing and tagging the data through supervised learning often works better, especially with smaller data sets.
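For readers who want to see the distinction rather than just read about it, here is a toy sketch in Python. The data, function names, and the nearest-neighbor and k-means methods are my own illustrative choices, not anything a real product would ship: the supervised half copies labels from labeled examples, while the unsupervised half discovers groups in the same numbers without ever seeing a label.

```python
# Toy illustration of supervised vs. unsupervised learning.
# (Hypothetical data and methods, chosen only to show the contrast.)

# Supervised: labeled examples teach the system pre-existing categories.
labeled = [(1.0, "small"), (1.2, "small"), (9.8, "large"), (10.1, "large")]

def classify(x):
    """Nearest-neighbor classifier: copy the label of the closest example."""
    nearest = min(labeled, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(classify(9.5))  # -> large  (the closest labeled example is 9.8)

# Unsupervised: no labels at all; the algorithm discovers the groups itself.
def cluster(points, k=2, iters=10):
    """Minimal 1-D k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    centers = [min(points), max(points)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            groups[i].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

print(cluster([1.0, 1.2, 9.8, 10.1]))  # two groups emerge without labels
```

The supervised classifier could never work without the human-provided labels; the clustering routine never sees them at all, which is exactly the divide described above.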

Deep learning is a type of machine learning that uses additional, hierarchical layers of processing (loosely analogous to human neuron structures) and large data sets to model high-level abstractions and recognize patterns in complex data. Deep learning systems are especially good at extracting patterns from complexity. (DARPA has a great, simple video on the three waves of AI that explains neural networks and shows how these systems fold and separate data to extract relationships.)
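To make “layers” concrete, here is a minimal sketch in Python. The weights are hand-picked for illustration rather than learned, and the XOR example is my own choice of demonstration: no single layer of this kind can compute XOR, but stacking a second layer on top of the first makes the pattern representable, because each layer builds on the features the layer below it extracted. Real deep networks learn millions of such weights from data.

```python
# A minimal two-layer network that computes XOR ("one input on, but not
# both"). Weights are hand-chosen for illustration; a real deep learning
# system would learn them from a large data set.

def step(x):
    """Simple threshold activation: fire (1) if the input exceeds 0."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Layer 1 extracts two intermediate features from the raw inputs:
    h_or = step(x1 + x2 - 0.5)    # fires if at least one input is on
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    # Layer 2 combines those features into the final answer:
    return step(h_or - h_and - 0.5)  # "at least one, but not both"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The hidden layer is what lets the network capture a non-linear relationship; adding many more such layers, each feeding the next, is what puts the “deep” in deep learning.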

As a side note, I have also seen very little big data work that doesn’t also involve machine learning. The two go hand in hand these days.

DARPA’s video on the three waves of AI is well worth a watch; many people don’t realize that tools like TurboTax are actually AI systems. The three waves are:

Wave 1: Handcrafted knowledge — Good reasoning over narrow domains, but no learning or handling of uncertainty (e.g., TurboTax).

Wave 2: Statistical learning — Good classification and prediction, but poor context and reasoning (e.g., machine learning systems like Siri and Alexa).

Wave 3: Contextual adaptation — AI tools will build and improve models that can explain their decisions. This is the future of AI.

And there will certainly be waves 4, 5, 6 and onward…

More terms you might hear:

Narrow AI is a system that is expert at one specific task, like image recognition or playing Go.

Artificial General Intelligence (AGI) is an AI that matches human intelligence across the full range of tasks.

Artificial Superintelligence (ASI) is an AI that exceeds human capabilities.

A narrow AI is a system that excels in one specific area, like recognizing images, delivering personalized news recommendations, or trading stocks. They’re specialized tools. All of the AIs we have today are narrow AIs.

AGI and ASI — also popularly thought of as sentient AI — are still science fiction (see the movies Her, Ex Machina, The Matrix, and The Terminator). Elon Musk, Nick Bostrom and Stephen Hawking like to talk about the dangers of superintelligence, and those are potentially real concerns, but not near-term ones. (A good introductory discussion of superintelligence can be found on Wait But Why.)

Philosopher John Searle (in his Chinese Room paper) argues quite eloquently that programs will never give computers understanding or consciousness. I wrote a story about how you don’t actually need a sentient AI for mayhem — a really capable and evil optimization algorithm could be terribly threatening. But some AI researchers argue the risk from a rogue AI is a red herring, and that we have much more pressing and immediate AI concerns (like bias, data privacy, computational propaganda, job displacement, and others).

But AGI and ASI may not be sci-fi for long. Ray Kurzweil, a well-known AI pioneer and futurist, still stands by his prediction that AIs will be as smart as people in 12 years. Another point Kurzweil likes to make is that we already have billions of AI systems in the world. You can see how complex and diversified the machine learning field is in the Bloomberg Beta chart that shows a sample of machine learning companies and industry areas. These technologies already drive many of our online experiences and are increasingly utilized in every industry in the world.

AI is a complex topic, and hopefully this article has helped to simplify it for non-experts. If you have any suggestions for additional definitions or missing terms, please let me know in the comments.