The Machine Intelligence Continuum

Mariya Yao
Published in TOPBOTS · Oct 16, 2017

This is part two of our WTF IS AI?! series. Read part one on modern AI techniques if you missed it.

If you’re not an AI researcher or engineer, understanding the subtle differences and applications of various machine learning approaches can be challenging. Business problems can usually be solved in multiple ways by different algorithms, and the comparative merits of each methodology might not be obvious without technical experience or practical experimentation.

To help business executives disentangle the functional differences between AI approaches, we’ve segmented applications along our Machine Intelligence Continuum (MIC). The MIC spans from simple, scripted automation to superhuman intelligence and highlights the functional capabilities of each level of machine intelligence.

Although we describe seven levels along the continuum, keep in mind that the levels are not mutually exclusive and many applications overlap several of them.

SYSTEMS THAT ACT

The lowest level of the Machine Intelligence Continuum is “Systems That Act”, which we define as rule-based automatons. These systems are hand-engineered by experts and perform in a scripted fashion, typically following if-then rules.

Examples include the fire alarm in your house and the cruise control in your car. A fire alarm contains a sensor that detects smoke. When smoke exceeds a set threshold, the device plays an alarm sound until it is manually turned off. Similarly, the cruise control in your car monitors your automobile’s speed and uses a motor to vary throttle position to maintain a constant speed.
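To make “scripted” concrete, here is a minimal sketch of fire-alarm logic in Python. The threshold value is invented for illustration; the point is simply that every behavior is hand-coded by a person rather than learned from data.

```python
# A minimal sketch of a "System That Acts": a rule-based fire alarm.
# The threshold is a made-up illustrative value, not a real specification.

SMOKE_THRESHOLD = 0.3  # hypothetical smoke-density level that triggers the alarm

def fire_alarm(smoke_level: float, manually_silenced: bool) -> str:
    """Follow a fixed if-then rule; nothing here is learned from data."""
    if smoke_level >= SMOKE_THRESHOLD and not manually_silenced:
        return "ALARM"
    return "quiet"

# The device behaves exactly as scripted, no matter how conditions change.
print(fire_alarm(smoke_level=0.45, manually_silenced=False))  # -> ALARM
print(fire_alarm(smoke_level=0.10, manually_silenced=False))  # -> quiet
```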

You would never set your cruise control, take your hands off the wheel, and claim you have a self-driving car. That would result in very negative outcomes for you. Yet most companies claiming to have “AI” are really just using Systems That Act, or rule-based mechanisms that are incapable of dynamic actions or decisions.

SYSTEMS THAT PREDICT

“Systems That Predict” are systems capable of analyzing data and producing probabilistic predictions based on that data. Note that a “prediction” does not necessarily concern a future event; it is a mapping of known information to unknown information. Andrew Pole, a statistician for Target, explained to the New York Times how he was able to identify 25 products, including unscented lotion and calcium supplements, that together predict the likelihood of a shopper being pregnant and even the stage of her pregnancy. Target used this information to serve eerily well-timed advertisements and coupons to trigger desired consumption behavior in pregnant shoppers.
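As a rough sketch of the idea (the data, feature names, and model below are hypothetical illustrations, not Target’s actual methodology), a probabilistic prediction can be as simple as fitting a logistic regression to past purchase records and asking for a probability on a new shopper:

```python
# A minimal sketch of a "System That Predicts": mapping known shopping signals
# to an unknown attribute as a probability. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [bought_unscented_lotion, bought_calcium_supplements, bought_large_tote]
X = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = the predicted attribute was later confirmed

model = LogisticRegression().fit(X, y)

new_shopper = np.array([[1, 1, 0]])
probability = model.predict_proba(new_shopper)[0, 1]
print(f"Predicted probability: {probability:.2f}")
```

The output is only a probability, and as the next paragraph notes, it is only as trustworthy as the data that produced it.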

Automated and computational statistics underlie most “Systems That Predict”, but predictions are only as good as the incoming data. If your data is flawed, or the sample you analyze does not represent your target population as a whole, you will get erroneous results. The 2016 US election polls are a painful reminder that lapses in data integrity and methodological mistakes are common in statistical analyses and often lead executives to the wrong conclusions.

SYSTEMS THAT LEARN

Machine learning and deep learning drive most “Systems That Learn”. While many learning systems also make predictions like statistical systems do, they differ in that they require less hand-engineering and can learn to perform tasks without being explicitly programmed to do so. For many computational problems, they can function at human or better-than-human levels.

Learning can be automated at different levels of abstraction and for different components of a task. Completing a task requires first acquiring data, which is used to generate a prediction about the world. This prediction is combined with higher-level judgment and an action to produce a result. Feedback and measurements from the outcome can then be fed back to earlier decision points to improve task performance.

Many enterprise applications of statistics and machine learning focus on improving the process of turning data into predictions. In sales, for example, machine learning approaches to lead scoring can perform better than rule-based or statistical methods. Once the machine has produced a prediction of how good a lead is, the salesperson applies human judgment to decide on follow-up action.

More complex systems, such as self-driving cars and industrial robotics, handle the entire anatomy of a task. An autonomous vehicle must turn video and sensor feeds into accurate predictions of the surrounding world and take the correct action based on the environment. Some complex models can also perform online learning, which entails using real-time data to update machine learning models, versus offline learning, which involves training models on static, pre-existing data.
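The distinction between online and offline learning can be sketched with scikit-learn’s SGDClassifier, which supports both styles of training; the streaming data below is synthetic and purely illustrative.

```python
# A minimal sketch of offline vs. online learning with scikit-learn.
# The data is randomly generated for illustration only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Offline learning: train once on a static, pre-existing batch of data.
X_batch = rng.normal(size=(500, 4))
y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
offline_model = SGDClassifier().fit(X_batch, y_batch)

# Online learning: update the model incrementally as new data arrives.
online_model = SGDClassifier()
for _ in range(20):  # pretend each iteration is a fresh real-time feed
    X_stream = rng.normal(size=(25, 4))
    y_stream = (X_stream[:, 0] + X_stream[:, 1] > 0).astype(int)
    online_model.partial_fit(X_stream, y_stream, classes=[0, 1])

print(offline_model.predict(X_stream[:5]), online_model.predict(X_stream[:5]))
```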

SYSTEMS THAT CREATE

We humans like to think we’re the only beings capable of creativity, but computers have been used for generative design and art for decades. Recent breakthroughs in neural network models have inspired a resurgence of computational creativity, with computers now capable of producing original writing, imagery, music, industrial designs, and even AI software!

Berlin-based engineer Samim trained a neural network on 14 million passages from romance novels and asked the model to generate stories about images. Flow Machines, a research project backed by Sony, used AI trained on Beatles songs to generate its own pop song, “Daddy’s Car”, which eerily resembles the musical style of the iconic British band. The team did the same with Bach and was able to fool human evaluators, who often couldn’t differentiate between real Bach and AI-generated imitations.
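The systems above rely on neural networks, but the underlying idea of generating new sequences from learned statistics can be illustrated with something far simpler. Below is a toy word-level Markov chain trained on a couple of made-up sentences; it is a deliberate simplification, not the approach Samim or Flow Machines actually used.

```python
# A toy "System That Creates": a word-level Markov chain text generator.
# This is a simple stand-in for the neural models described above;
# the training sentences are invented for illustration.
import random
from collections import defaultdict

corpus = [
    "she looked at the sea and the sea looked back",
    "he looked at her and smiled at the sea",
]

# Learn which words tend to follow which.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Sample a new sentence by repeatedly picking a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        output.append(word)
    return " ".join(output)

print(generate("she"))
```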

Autodesk, the leading producer of CAD software for industrial design, released Dreamcatcher, a program that generates thousands of possible design permutations based on initial constraints set by engineers. Dreamcatcher has produced bizarre yet highly effective designs that challenge traditional manufacturing assumptions and exceed what human designers can manually ideate.

AI is even outperforming some artists economically! Google hosted an exhibition and auction of art generated with its DeepDream software, which collectively sold for $97,605.

SYSTEMS THAT RELATE

Daniel Goleman, psychologist and author of the book Emotional Intelligence, claims that emotional intelligence quotient (EQ) is more important than IQ in determining our success and happiness. As human employees increasingly collaborate with AI tools at work, and digital assistants like Apple’s Siri and Amazon Echo’s Alexa infiltrate our personal lives, machines will also need to be emotionally intelligent to succeed in our society.

Sentiment analysis, also known as opinion mining or emotion AI, extracts and quantifies emotional states from our text, voice, facial expressions, and body language. Knowing a user’s affective state enables computers to respond empathically and dynamically, as the best humans we know often do. The applications to digital assistants are obvious, and companies like Amazon are already prioritizing emotional recognition for the Echo.
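As a rough sketch of how text-based sentiment analysis works under the hood (production systems such as Affectiva’s or Amazon’s rely on far richer models and signals), a simple lexicon-based scorer just sums the emotional weight of the words it recognizes:

```python
# A toy lexicon-based sentiment scorer. Real emotion-AI systems use much
# larger lexicons or trained models; the word weights here are invented.
SENTIMENT_LEXICON = {
    "love": 2.0, "great": 1.5, "happy": 1.5,
    "slow": -1.0, "hate": -2.0, "broken": -1.5,
}

def sentiment_score(text: str) -> float:
    """Sum the weights of known words; a positive total suggests positive affect."""
    return sum(SENTIMENT_LEXICON.get(word, 0.0) for word in text.lower().split())

print(sentiment_score("i love this assistant and it makes me happy"))    # positive
print(sentiment_score("the response was slow and the app felt broken"))  # negative
```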

Emotional awareness can also improve interpersonal business functions such as sales, marketing, and communications. Rana el Kaliouby, co-founder of Affectiva, a leading emotion AI company, helps advertisers improve the effectiveness of brand content by assessing and adapting to consumer reactions. Mental and behavioral health is also an area ripe for innovation. Affectiva originated from academic research at MIT designed to help people on the autism spectrum better recognize social and emotional cues.

SYSTEMS THAT MASTER

A human toddler only needs to see a single tiger to develop a mental construct of the animal and recognize other tigers. If humans needed to see thousands of tigers before learning to run away, our species would have died out long ago. By contrast, a deep learning algorithm needs to process thousands of tiger images before it begins to recognize tigers in images and video. Even then, neural networks trained on tiger photos do not reliably recognize more abstract representations of the animal, such as cartoons or costumes.

Humans have no trouble with this, because we are “Systems That Master”. A “System That Masters” is an intelligent agent capable of constructing abstract concepts and strategic plans from sparse data. By creating modular conceptual representations of the world around us, we are able to transfer knowledge from one domain to another, a key feature of general intelligence.

As we discussed in part one of our WTF Is AI?! series, no modern AI system is an AGI, or artificial general intelligence. While humans are “Systems That Master”, current AI programs are not.

SYSTEMS THAT EVOLVE

This final category refers to systems that exhibit superhuman intelligence and capabilities. “Systems That Evolve” are entities capable of dynamically changing their own architecture and design to adapt to environmental needs. As humans, we’re limited in our intelligence by our biological brains, also known as “wetware”. We evolve through genetic mutations across generations, rather than through re-architecting our own biological infrastructure during our lifetime. We cannot simply insert new RAM if we wish to augment our memory capacity, or buy a new processor if we wish to think faster.

While we continuously search for other intelligent life, we are not aware of any “Systems That Evolve”, or superhuman intelligence. Computers are currently constrained by both hardware and software, while humans and other biological organisms are constrained by wetware. Some futurists hypothesize that we may be able to achieve superhuman intelligence by augmenting biological brains with synthesized technologies, but currently this research is more science fiction than science.

Once an upgradable intelligent agent does emerge, we will reach what many experts call the technological “singularity”, when machine intelligence surpasses human intelligence. Self-evolving agents will be capable of ever-faster iterations of self-improvements, leading to an intelligence explosion and the emergence of superintelligence.

BUILDING THE SYSTEMS OF TOMORROW

Will superhuman machines be good or bad for humanity? While no one can predict what superintelligence will look like, we can take measures today to increase the likelihood that intelligent systems we design are effective, ethical, and elevate human goals and values.

How we build today’s “Systems That Learn”, “Systems That Create”, and “Systems That Relate” will affect how we build tomorrow’s “Systems That Master” and “Systems That Evolve”. We go into a more detailed discussion of the Machine Intelligence Continuum and how to design beneficial AI systems in our executive introduction to artificial intelligence.

Originally published at www.topbots.com on October 9, 2017.

Love what you read? Join the TOPBOTS community to get the best bot news & exclusive industry content.
