What is AI, really?

A cultural and practical introduction for designers

Rebecca West
AI-First Design
20 min read · Nov 15, 2017

--

This is the first chapter in Element AI’s Foundations Series on AI-First Design (AI1D). Each chapter aims to define the component parts of AI1D in order to create a common language with which to explore this new era of design. You can read the intro to the series here, and sign up to stay tuned for the next chapter here.

As a designer, why do you need to understand artificial intelligence? It’s a term being bandied about so much in media and tech circles lately, a kind of catchall that could describe anything from virtual personal assistants and robots to sci-fi characters or the latest deep learning algorithm. Perhaps you work in AI and have a more nuanced understanding of these distinct fields, or maybe you just sense that your work will be affected in some way by AI in the coming years, but you’re not quite sure how.

With this in mind, welcome to the first chapter in our AI-First Design Foundations Series, in which we aim to nail down the language of artificial intelligence and discuss its many definitions. In doing so we hope to land on an idea of what artificial intelligence is today, from which we can build towards answering: What is AI-First Design?

This chapter is devoted to examining the current AI landscape, and navigating the various definitions AI has seen since the term was first coined. We’ll review the history of AI, examining the peaks and troughs in its popularity, and highlight the major milestones since the recent surge in AI successes. Finally, we’ll examine AI’s many definitions, and some of the challenges we’re up against in coming up with one that everyone can agree on. Heads up: it’s on the longer side, so get comfy, skip ahead to a section that might interest you more, or just read this super short version below.

TL;DR

Rather than starting our examination of AI in the 1950s, our timeline starts much earlier, in Homer’s Iliad, when we were already looking to imbue statues and gods with human-like qualities. Much has happened since then! Today, we’ve reached an all-time high in terms of AI’s rate of advance, funding and enthusiasm, although there is still a wide gap between sci-fi expectations and the realities of what can be accomplished by machines. AI remains very far from reaching human-like general intelligence, but is getting better and better at accomplishing narrowly defined tasks. Here are the main components of how we define AI today and why it matters to you as a designer:

  1. It is largely based on data.
    Recent advances in AI would not have been possible without the huge amounts of data collected by all of our connected devices and the ability to store it.
  2. It is narrow and very focused.
    AI is very good at finding patterns in data and accomplishing specific tasks that we have defined, but doesn’t generalize very well outside of predefined parameters.
  3. It is unconcerned with the outcome of its calculations.
    Unlike the inherent messiness of human decision-making, an AI isn’t influenced by ulterior motives or how much sleep it got last night; it is solely focused on the task at hand. Since it doesn’t know good from bad, however, any biases that exist in the data are perpetuated.
  4. AI’s abilities are learned, not programmed.
    AI can improve iteratively on its own — without being programmed every step of the way, it can learn from its experiences and improve at making future predictions and decisions, resulting in increasingly sophisticated abilities.
  5. It is an evolving term.
    AI is defined differently by different communities and its definition will continue to change with future advances in technology.

Knowing this, we believe AI will have a tremendous impact on the field of design as we know it. As it begins to influence the design of all businesses, products, services and (user) experiences, it’s essential that we have a fundamental understanding of what we’re working with, and decide how we want to harness its potential.

Still curious? There’s more to it!

Ups and Downs of AI Through Time

Precursors: a wish to forge the gods

Although we usually picture something futuristic when we think of AI, the notion has been around for centuries. Around 750 BC in Homer’s Iliad, for example, the crippled Hephaestus created automata to help him get around:

These are golden, and in appearance like living young women. There is intelligence in their hearts, and there is speech in them and strength, and from the immortal gods they have learned how to do things.

In her book Machines Who Think, Pamela McCorduck describes a host of other creatures that Hephaestus created for various tasks, at least one of which is surely familiar, if a tad menacing: Pandora and her infamous box.

Mechanizing Thought

Beyond these examples in fiction, there were important advances in reasoning and logic in antiquity that laid the groundwork for the codified languages underpinning all modern computing. Artificial intelligence, at its essence, assumes that thought can be mechanized and reproduced. Aristotle was among the first to organize thought into logical arguments, developing the syllogism, which often takes a three-line form, such as:

All men are mortal.
Socrates is a man.
Therefore Socrates is mortal.

The Persian mathematician Muhammad ibn Musa al-Khwarizmi, also known by his Latinized name Algoritmi (from which we derive the word algorithm), is another key figure behind many of the concepts we take for granted in AI today. The word algebra, for example, is derived from “al-jabr”, one of the two operations he used to solve quadratic equations. Further advances throughout the 17th century by mathematicians and philosophers such as Gottfried Wilhelm Leibniz, Thomas Hobbes and René Descartes built on these foundations, aiming to make thought as systematic as algebra or geometry.

While there were many other mathematical advances in the following centuries that contributed to modern day artificial intelligence, the 19th century English mathematician Ada Lovelace stands out for her creative approaches and groundbreaking work in computing. She was the first to suggest that Charles Babbage’s mechanical general-purpose computer, the Analytical Engine, might have capabilities beyond calculation, and then went on to create its first algorithm, earning her the title of the world’s first computer programmer.

The birth of artificial intelligence

Although we saw advances in computing throughout the early 20th century, artificial intelligence really took off in the 1950s, with a conference at Dartmouth College in 1956 asserting that all learning and intelligence could be described precisely enough to be simulated by a machine. It was here that the term “artificial intelligence” was first coined, referring to “the simulation of human intelligence by machines”. Reflecting on the Dartmouth workshop 50 years later, one of the organizers, John McCarthy, mused: “I would have thought that the workshop would have been known for the results that it produced. It, in fact, did become known to a significant extent simply because it popularized the term ‘artificial intelligence’.”

The other major AI milestone from the 50s that you may be familiar with is the famed “Turing Test”, proposed by the British computer scientist Alan Turing (and popularized more recently by Benedict Cumberbatch’s portrayal of him in The Imitation Game). Turing suggested that if a machine could carry out a conversation indistinguishable from one with a human, then a “thinking machine” was plausible. In other words, a computer could be considered intelligent if it could fool a human into thinking that it was human.

The period from the mid-fifties through the early 70s is often referred to as AI’s “golden years”, with huge advances in computing and increases in both enthusiasm and government funding. Marvin Minsky kept the momentum going from the Dartmouth workshop when he co-founded the Massachusetts Institute of Technology’s AI laboratory in 1959, and he continued to lead the field throughout the 60s and 70s. Gaming also began to reveal itself as an ideal means of developing and testing computer intelligence, with IBM developing a program that could play checkers in 1951. In the 60s, the “nearest neighbour” algorithm was created in an attempt to solve the “travelling salesman problem”: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?” The resulting algorithm formed the beginnings of basic pattern recognition.
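For the curious, here is a minimal sketch of that nearest-neighbour heuristic applied to the travelling salesman problem, written in Python purely for illustration; the city names and coordinates are invented, and the original 1960s work obviously wasn’t expressed this way.

```python
# A minimal sketch of the greedy nearest-neighbour heuristic for the
# travelling salesman problem. City names and coordinates are invented.
import math

cities = {
    "A": (0, 0),
    "B": (1, 5),
    "C": (5, 2),
    "D": (6, 6),
    "E": (8, 3),
}

def nearest_neighbour_tour(start="A"):
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        # Greedily hop to the closest city we haven't visited yet.
        nearest = min(unvisited, key=lambda c: math.dist(here, cities[c]))
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(start)  # return to the origin city
    return tour

print(nearest_neighbour_tour())  # ['A', 'B', 'C', 'E', 'D', 'A']
```

The greedy choice at each step rarely yields the truly shortest route, but it is simple, fast, and captures the proximity-based reasoning that fed into early pattern recognition.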

In 1969, however, Marvin Minsky and Seymour Papert published Perceptrons, a book discussing some of the limitations of the neural network technology of the time, and perhaps a harbinger of the “AI winter” of the following years.

AI winters in the 70s and 80s

With such a successful run from the 50s through the 70s, fuelled not only by scientific advances but also by elevated public expectations nurtured by science fiction such as Stanley Kubrick’s 2001: A Space Odyssey or Isaac Asimov’s I, Robot, a collision with the limitations of AI was inevitable.

Essentially, when the computers couldn’t live up to everyone’s unrealistically high expectations, funding and enthusiasm dried up, leading to the dismantling of AI laboratories around the world. Although there was a brief second wind from 1980 to 1987 with a large investment from Japan, this boom was short-lived and bookended by another AI winter from 1987 to 1993.

Roger Schank and Marvin Minsky, leading AI researchers who had survived the first winter of the 1970s, warned the business community that “enthusiasm for AI had spiralled out of control in the ’80s and that disappointment would certainly follow.” These peaks and valleys in AI enthusiasm continue today. Although there have been a few unpopular uses of AI in recent years, such as the US Army’s use of AI to identify friendly or enemy tanks, or more recently Microsoft’s Tay chatbot, which rapidly exhibited racist and anti-semitic behaviours on Twitter last year, generally speaking, you could say that today we’re at an all-time high in terms of AI advances, funding and enthusiasm.

AI Landscape Today — Why so hot?

A popular tool to measure tech hype is Gartner’s Hype Cycle, which this year features deep learning and machine learning at its peak. While it is often considered more an indicator of media coverage than scientific research, there are some legitimately exciting advances that have led to AI’s current popularity. So, is it all, in fact, hype? Not quite. Let’s examine some major AI milestones from the last six years or so that have contributed to our current obsession.

Recent AI Milestones

  • 2011: Apple’s Siri is introduced, using somewhat natural language to answer questions, make recommendations and perform simple actions, or failing that, look things up on the internet for you.
  • 2012: Convolutional Neural Networks (CNNs for short) destroy the competition at ImageNet classification — a.k.a. the “annual Olympics of computer vision” — creating a furor in the community, and unleashing a huge resurgence of interest in deep learning.
  • 2012: Google trains a neural network to successfully recognize cats in YouTube videos using a deep learning algorithm, despite being fed no information on distinguishing cute cat features.
  • 2013: NEIL, the amusingly named Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images, aiming to learn the oh-so-desirable-yet-elusive human ability of common sense.
  • 2015: Facebook starts rolling out DeepFace, a deep learning facial recognition system that was trained on four million images uploaded by Facebook users. It can identify faces with 97.35% accuracy, an improvement of more than 27% over previous systems.
  • 2015: Deep Q Networks by DeepMind learn to play Atari games, marking the coming-of-age of deep reinforcement learning.
  • 2015–17: Google DeepMind’s AlphaGo defeats Go champions Fan Hui, Lee Sedol and Ke Jie, the world’s №1 ranked player at the time.
  • 2015: Google DeepDream makes everyone wonder if machines can make art, generating trippy images using a convolutional neural network originally designed to detect faces and other patterns in order to classify images automatically.
  • 2015-present: Artist Ross Goodwin explores new forms of narrated reality using machine learning, from his poetic “automatic photo” narrator Word Camera to the self-titled AI “Benjamin”, which he programmed to write a script for a movie starring David Hasselhoff.
  • 2015-present: A range of AI personal assistants are introduced to the home, with Apple’s Siri now battling it out with Microsoft’s Cortana, Amazon’s Alexa and Google Now for your attention.
  • 2017: Libratus, designed by Carnegie Mellon professor Tuomas Sandholm and his grad student Noam Brown, wins against four top players at no-limit Texas Hold’em, a complex version of poker.
  • 2017: Google’s DeepMind and the creators of the multiplayer space-war video game StarCraft II release the tools to let AI researchers create bots capable of competing against humans. The bots haven’t won yet, and aren’t expected to for a while, but when they do, it will be a much bigger achievement than winning at Go.

Advances in Machine Learning & Deep Learning

Where the AI practitioners live

All of these milestones would not have been possible without major advances in the most exciting areas of artificial intelligence in the last decade: machine learning and deep learning. Although these terms sound similar, they’re not quite the same. Let’s clarify.

Starting in the late 90s and early 2000s, increased computer storage and processing capabilities meant that AI systems could finally hold enough data and harness enough power to tackle more complex processes. At the same time, the explosion in internet use and connectivity created an ever-increasing amount of data, such as images, text, maps or transaction information that can be used to train machines.

Instead of the older programmatic approach of “if-then” rules and complicated symbolic logic procedures requiring thousands of lines of code to guide basic decision-making (Good Old-Fashioned Artificial Intelligence, or GOFAI), machine learning works backwards. Using huge datasets, algorithms learn iteratively, looking for patterns to make sense of future inputs. Machine learning was nicely summed up by machine learning pioneer Arthur Samuel, who way back in 1959 described it as the “field of study that gives computers the ability to learn without being explicitly programmed.” Machine learning is being used to address a broad range of issues today, such as identifying cancer cells, predicting what movie you might want to watch next, understanding all sorts of spoken language or determining the market value of your house.
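To make the contrast concrete, here is a minimal sketch in Python using scikit-learn (our choice purely for illustration): a hand-written GOFAI-style rule sits next to a model that infers a similar rule from examples. The house sizes and prices are entirely made up.

```python
# A hand-written rule vs. a rule learned from data.
# The house sizes, bedroom counts and prices are invented for illustration.
from sklearn.linear_model import LinearRegression

# GOFAI-style: a programmer encodes the decision logic explicitly.
def price_by_rule(square_metres, bedrooms):
    return 2000 * square_metres + 15000 * bedrooms

# Machine learning: the mapping is inferred from example data instead.
X = [[50, 1], [80, 2], [120, 3], [200, 4]]   # [square metres, bedrooms]
y = [115000, 190000, 285000, 460000]         # observed sale prices

model = LinearRegression().fit(X, y)

print(price_by_rule(100, 2))        # answer comes from the hand-written rule
print(model.predict([[100, 2]]))    # answer comes from patterns in the data
```

The hand-written rule only improves if a programmer rewrites it; the learned model improves simply by being shown more, and better, examples.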

Which are the cancerous cells in this image? An AI might be able to spot them faster than a doctor. Image: Gabriel Caponetti in Popular Science.

Recent advances in machine learning have largely been due to the growth of deep learning — a subfield of machine learning. Deep learning borrows from the structure of the brain, linking lots of simple neuron-like structures together into a neural network. By stacking many layers of these artificial neurons together (hence “deep”), the network as a whole can learn to do complex tasks. Interestingly, neurons in these layers often end up performing specific roles, such as recognizing edges, or the outline of a specific object. The unique strength of deep learning is that these sub-tasks — often known as “features” — are learned directly from the data, rather than being specified by programmers. This allows deep learning to tackle problems where the solutions aren’t obvious to humans.
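As a rough illustration of that stacking, here is a minimal sketch in PyTorch (our choice of library, purely for illustration); the layer sizes are arbitrary placeholders rather than values taken from any real model.

```python
# A minimal sketch of "depth": several simple layers composed into one network.
# Layer sizes are arbitrary placeholders chosen for illustration.
import torch
import torch.nn as nn

network = nn.Sequential(
    nn.Linear(784, 128),  # a first layer of artificial "neurons"
    nn.ReLU(),
    nn.Linear(128, 64),   # a second, stacked layer (hence "deep")
    nn.ReLU(),
    nn.Linear(64, 10),    # an output layer, e.g. one score per possible class
)

fake_image = torch.rand(1, 784)  # a stand-in for a flattened 28x28 pixel image
scores = network(fake_image)     # one forward pass through all the layers
print(scores.shape)              # torch.Size([1, 10])
```

During training, the weights inside each layer are adjusted automatically, which is where the learned “features” described above come from.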

Let’s take a real-life example: recognizing cancer cells. A classic AI approach would rely upon a human expert trying to distill their own decision-making process and then codify it in the algorithm. For instance, we might flag cells that are greater than a certain size, or have a fuzzy outline, or a peculiar shape. With deep learning, however, we can directly feed images of cells labeled to indicate whether they’re cancerous or not, and our neural network will learn to pick out the most useful features of the image for this particular task. This is a classic example of “supervised learning”: we provide some inputs and some desired outputs, and the algorithm learns to map from one to the other.
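Here is a toy sketch of that supervised setup. The numbers standing in for cell measurements and the expert-supplied labels are invented, and a real system would learn directly from labelled images with a deep network, but the inputs-plus-labels pattern is the same.

```python
# Supervised learning in miniature: inputs paired with labels.
# The feature values and labels are invented stand-ins; a real system would
# learn directly from labelled images of cells.
from sklearn.ensemble import RandomForestClassifier

X_train = [  # each row stands in for one cell (e.g. size, outline fuzziness)
    [0.2, 0.1],
    [0.3, 0.2],
    [0.8, 0.9],
    [0.9, 0.8],
]
y_train = [0, 0, 1, 1]  # labels from experts: 0 = healthy, 1 = cancerous

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict([[0.85, 0.75]]))  # predicted label for a new, unseen cell
```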

We can also do away with the labels entirely, and ask the algorithm to group the cells that have something in common. This process is known as clustering, and it’s a type of unsupervised learning. Here we’re not providing supervision in the form of labels; we’re simply using deep learning to find structure in the data. In our example, perhaps our cells are of lots of different types — skin cells, liver cells, and muscle cells — and it’d be useful to cluster these before trying to figure out which cells in each cluster are cancerous. Other common applications for clustering include identifying different faces in your photos, understanding different types of customers and collating news stories about the same topic.
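And here is a correspondingly tiny sketch of clustering, again with invented measurements: we hand the algorithm unlabelled points and ask it to find the groups on its own.

```python
# Unsupervised learning in miniature: no labels, just grouping.
# The two-dimensional points are invented stand-ins for cell measurements.
from sklearn.cluster import KMeans

X = [
    [1.0, 1.1], [0.9, 1.0],   # one natural group
    [5.0, 5.2], [5.1, 4.9],   # another
    [9.0, 0.5], [8.8, 0.4],   # and a third
]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 1 1 2 2]: three groups found without labels
```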

Don’t believe the hype: AI Myths vs. Realities

So with all of these rapid advances in AI in recent years, you’d think we’d all be pumped about it, right? Well, not everyone is. As in the first golden years of AI in the 50s and 60s, there is still a wide gap between our expectations of AI, shaped by depictions in science fiction and the media, and what AI is actually capable of today. (Not to mention the rampant fear of disruption, privacy concerns or job loss associated with these predictions.)

Another way to frame this discussion is the difference between “narrow” and “general” artificial intelligence. Most of AI’s biggest successes so far have been in “narrow” artificial intelligence, i.e. accomplishing a specific task within strict parameters, such as Siri typing a dictated text message for you, or recognizing a cat in an image. There is no notion of self-awareness or general problem-solving skills in narrow AI. Conversely, much of what has captured the public’s imagination over the decades has been the fantasy of “general artificial intelligence” in the form of a human-like assistant, akin to HAL 9000, R2-D2 or Samantha in Her, where the AI’s intelligence is equal to, if not greater than, a human’s.

To be very clear, we’re a long way away from anything resembling general AI. Yoshua Bengio, one of Element AI’s founders, is explicit when speaking on this topic: he doesn’t believe it’s reasonable to make a time-based prediction of when it might happen. In a recent talk, he outlined a few specific reasons why we’re not there yet, the first being that all industrial AI successes to date have been based purely on supervised learning. Our learning systems are still quite simple-minded, in that they rely on superficial clues in the data that don’t hold up outside of their training contexts.

Google’s neural net-generated dumbbells, complete with phantom limbs. Image: Google.

For example, when Google trained a neural network to generate images of dumbbells based on thousands of pictures, it got it almost right. Sure, we have two weights connected by a bar, but what are those phantom arms doing in there? Although the neural network was able to successfully identify the common visual properties of dumbbells, since the source images always featured humans holding dumbbells, it also assumed dumbbells have arms.

Despite such significant limitations, to hear Elon Musk spar with Mark Zuckerberg this past summer, you’d think an AI-fueled World War III was around the corner. Our CEO Jean-François Gagné brings us back to basics about the current state of AI in a recent blog post:

“AI is very narrow, and fragile. It doesn’t function well outside of the scope it’s set up for. It can only manage simple objective functions; so, it really is us, the humans, using our human intelligence to apply it effectively to the point where a job may be automated.”

AI’s many definitions

Now that we’re up to speed on the historical developments and recent progress in AI, let’s dig into the many definitions that we have come up with to describe it over the years. While some have argued that the term is so overused lately that it has become meaningless, we aren’t quite willing to give up on it.

How the term “AI” is used today

To define AI, let’s start by examining intelligence. On the one hand, you could take a simplistic notion of intellect, based on an IQ score for example. But we all know that intelligence is in fact much more layered and complex. The Oxford Dictionary defines it as: “the ability to acquire and apply knowledge and skills”, while the Cambridge Dictionary’s approach is a little different: “the ability to learn, understand, and make judgments or have opinions that are based on reason.” Others have developed more nuanced ways of measuring intelligence over the years, such as Howard Gardner’s theory of multiple intelligences, featuring modalities such as musical-rhythmic and harmonic, visual-spatial, verbal-linguistic, logical-mathematical, bodily-kinesthetic, and existential, amongst others. Our take on it is closer to this last definition, allowing for the acquisition, processing and applying of information within a broad range of contexts.

Our idea of intelligence is also very anthropomorphic: it’s based on the way that we, as humans, think about and solve problems. AI is widely understood in the same way, in that an artificially intelligent system comes to conclusions in a way that resembles a human’s approach. Building on this idea, David C. Parkes and Michael P. Wellman present the notion of AI as “homo economicus, the mythical perfectly rational agent of neoclassical economics.” But while it’s tempting to think that we could conceive a perfectly rational entity, the data used to train AI is often inherently flawed, due to human or other bias, which makes “perfect rationality” nearly impossible to evaluate.

A 2016 White House Report on AI sums up the challenges in coming up with a cohesive definition: “There is no single definition of AI that is universally accepted by practitioners. Some define AI loosely as a computerized system that exhibits behaviour that is commonly thought of as requiring intelligence. Others define AI as a system capable of rationally solving complex problems or taking appropriate actions to achieve its goals in whatever real-world circumstances it encounters.” It’s interesting to note that they don’t use the term “human behaviour” here, but simply “behaviour”.

Swedish philosopher Nick Bostrom focuses on the notion of learning and adaptation in AI in his book Superintelligence: Paths, Dangers, Strategies: “A capacity to learn would be an integral feature of the core design of a system intended to attain general intelligence… The same holds for the ability to deal with uncertainty and probabilistic information.” Others, such as computer engineering professor Ethem Alpaydın in his book Introduction to Machine Learning, state that “an intelligent system should be able to adapt to its environment; it should learn not to repeat its mistakes but to repeat its successes.”

Our definitions

In addition to examining how others define AI today, part of our research also involved sending out a company-wide survey asking our colleagues to define artificial intelligence in a sentence (or two, or three). In the survey results, three main answer categories emerged:

  1. AI is a computer’s ability to make decisions or to predict, based on data available to it.
  2. AI is a computer’s ability to replicate higher-order brain functions such as perception, cognition, control, planning, or strategy.
  3. AI is a program created by data and computation, i.e. not hard-coded.

For our purposes today, are these definitions enough? What are some of the pitfalls in attempting to define such a broad and ever-evolving concept?

Why is this so difficult?

The “catchall” phenomenon is one of the major challenges when we talk about AI. Frequent use of the term has resulted in a broad range of applications, and inherent confusion, as explained by Genevieve Bell, who holds a PhD in anthropology from Stanford and is Director of Interaction and Experience Research at Intel:

“For me, artificial intelligence is a catchall term and it’s one that’s cycled in and out of popularity. It’s back at the moment. It’s an umbrella term under which you can talk about cognitive computing, machine learning and deep learning, and algorithms. It’s a catchall because it means everything and nothing at the same time. It’s a cultural category as much as a technical one.”

Because it’s so broad, the term is often used in the wrong (or rather, imprecise) circumstances, as outlined in this 2017 McKinsey Global Institute discussion paper, AI: The next digital frontier:

“…It’s hard to pin down because people mix and match different technologies to create solutions for individual problems. Sometimes these are treated as independent technologies, sometimes as sub-groups of other tech and sometimes as applications… Some frameworks group AI technologies by basic functionality…, some group them by business applications…”

Another major challenge in defining AI is the fact that the science and its applications are constantly evolving. As Pamela McCorduck explains in her book Machines Who Think, an intelligent system solving a new problem is often discounted as “just computation” or “not real intelligence”. Philosopher Bostrom sums this up nicely: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.” For example, the IBM program that played checkers in 1951 might have been considered groundbreaking AI at the time, but would be described as basic computing today. Or, more recently, some would argue pessimistically that there is nothing “intelligent” about any “narrow AI”, such as AlphaGo beating Lee Sedol.

Considering all of these challenges, is there a way to reduce the cultural and media noise clouding our judgment and focus on tangible issues? When we use the word “AI”, we’re usually referring to a specific technology, such as natural language processing, machine learning, or machine vision. So being as specific as possible is a good place to start. In other circumstances, however, using the term “AI” is not misplaced, such as in situations where we really don’t know precisely which technology is in use. It’s a trap that we’re not immune to falling into, along with all AI practitioners and journalists fueling this ongoing discussion.

Looking ahead

In attempting to clearly articulate what AI “is”, we have discovered that it means quite a few different things to different people. It’s an idea that has captured our imagination for a very long time. Even if we narrow it down to computer science, it’s still very broad. With this in mind, we think it’s important to focus on how AI is already changing our lives, the breakthroughs today that are sparking this hype. Kevin Kelly summed this up nicely in a recent TED talk:

“There are no AI experts right now. There’s a lot of money going to it, there are billions of dollars being spent on it; it’s a huge business, but there are no experts, compared to what we’ll know 20 years from now. So we are just at the beginning of the beginning, we’re in the first hour of all this… The most popular AI product in 20 years from now, that everybody uses, has not been invented yet. That means that you’re not late.”

In other words, it’s normal that our notions of AI involve multiple viewpoints and sometimes contradictory ideas because it’s evolving and happening now. This isn’t meant to be read as a cop-out, but rather a call to embrace its inherent bigness and messiness as we work on making it better.

All this to say, we’re not going to establish THE definition. We do, however, want designers grappling with the technology coming into production today to have a basic understanding of AI and its capabilities. If “AI is whatever hasn’t been done yet”, as Tesler’s Theorem puts it, then this is precisely where we need to be looking — not at what’s been done already, but at what is possible, or very soon to be.

We believe that at its core, AI is an immense learning opportunity, and if developed mindfully, it can propel humans towards wide-sweeping advances. As horse-drawn ploughs dramatically revolutionized agriculture in the 1100s, and steam engines propelled manufacturing and transportation into a new era in the 18th century, we see AI underpinning the next century of digital innovation. As MIT physics professor Max Tegmark recently stated, now isn’t the time to ponder the future as some predestined event that we’re inevitably hurtling towards; rather, we should be asking ourselves, “What kind of future do we want to design with AI?”

If you enjoyed this, look out for the next chapter in our AI-First Design Foundations series — What is Design, Really?

Authors & Contributors

Rebecca West is Editor of the AI1D Journal at Element AI and a writer with a focus on projects at the intersection of design, technology and creativity.

Illustrations by Dona Solo, a Visual Designer at Element AI.

With contributions from Experience Designer Masha Krol, Applied Research Scientist Archy de Berker and our summer 2017 research intern Louis-Félix La Roche-Morin.
