The Machine Intelligence Continuum

Mariya Yao
Oct 16, 2017


The lowest level of the Machine Intelligence Continuum (MIC) is “Systems That Act,” which we define as rule-based automatons. These systems are hand-engineered by experts and perform in a scripted fashion, often following if-then rules.
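To make the idea concrete, here is a minimal sketch of such a rule-based automaton: a thermostat controller whose entire behavior is hand-engineered as if-then rules. The thresholds and actions are illustrative assumptions, not drawn from any real product.

```python
# Hypothetical "System That Acts": every behavior is a scripted if-then rule.
# Thresholds (18 and 24 degrees C) are invented for illustration.

def thermostat_action(temp_celsius: float) -> str:
    """Choose an action purely by hand-engineered rules; nothing is learned."""
    if temp_celsius < 18.0:
        return "heat"
    elif temp_celsius > 24.0:
        return "cool"
    else:
        return "idle"

print(thermostat_action(15.0))  # heat
print(thermostat_action(21.0))  # idle
print(thermostat_action(30.0))  # cool
```

The system can never handle a situation its designers did not anticipate; every response must be written in advance, which is exactly what distinguishes this level from the ones above it.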


“Systems That Predict” are systems capable of analyzing data and producing probabilistic predictions from it. Note that a “prediction” need not refer to a future event; it is a mapping of known information to unknown information. Andrew Pole, a statistician for Target, explained to the New York Times how he identified 25 products, including unscented lotion and calcium supplements, whose purchase predicts the likelihood that a shopper is pregnant, and even the stage of her pregnancy. Target used this information to serve eerily well-timed advertisements and coupons designed to trigger desired consumption behavior in pregnant shoppers.
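The shape of such a predictive system can be sketched with a hand-rolled logistic model that maps known information (recent purchases) to unknown information (a probability of pregnancy). The features, weights, and bias below are invented for illustration and have no relation to Target's actual model.

```python
import math

# Toy "System That Predicts": a logistic scoring model with made-up weights.
# Nothing here reflects Target's real methodology.
WEIGHTS = {
    "unscented_lotion": 1.2,
    "calcium_supplements": 0.9,
    "cocoa_butter": 0.7,
}
BIAS = -2.5  # baseline log-odds: most shoppers are not pregnant

def pregnancy_probability(purchases: set) -> float:
    """Map a set of purchased items to a probability in (0, 1)."""
    score = BIAS + sum(w for item, w in WEIGHTS.items() if item in purchases)
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid squashes log-odds

print(round(pregnancy_probability(set()), 3))          # low baseline probability
print(round(pregnancy_probability(set(WEIGHTS)), 3))   # higher with all signals
```

Unlike the rule-based system, the output is a graded probability rather than a scripted action, though in a real deployment the weights would be estimated from historical data rather than set by hand.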


Machine learning and deep learning drive most “Systems That Learn”. While many learning systems also make predictions, as statistical systems do, they differ in that they require less hand-engineering and can learn to perform tasks without being explicitly programmed to do so. For many computational problems, they can function at human or better-than-human levels.
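The contrast with the earlier levels can be shown with a minimal learning example: a perceptron that learns the logical AND function from labeled examples instead of hard-coded rules. The learning rate and epoch count are arbitrary illustrative choices.

```python
# Minimal "System That Learns": a perceptron trained on labeled examples.
# No if-then rules are written; the decision boundary is learned from data.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND function

def train_perceptron(data, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # the error signal drives learning
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train_perceptron(DATA)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in DATA]
print(preds)  # [0, 0, 0, 1] — matches the AND labels
```

The programmer specifies only the training data and the update procedure; the behavior itself emerges from learning, which is the defining property of this level.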


We humans like to think we’re the only beings capable of creativity, but computers have been used for generative design and art for decades. Recent breakthroughs in neural network models have inspired a resurgence of computational creativity, with computers now capable of producing original writing, imagery, music, industrial designs, and even AI software!


Daniel Goleman, psychologist and author of the book Emotional Intelligence, claims that emotional intelligence quotient (EQ) is more important than IQ in determining our success and happiness. As human employees increasingly collaborate with AI tools at work, and digital assistants like Apple’s Siri and Amazon Echo’s Alexa infiltrate our personal lives, machines will also need to be emotionally intelligent to succeed in our society.


A human toddler needs to see only a single tiger to develop a mental construct of the animal and recognize other tigers. If humans needed to see thousands of tigers before learning to run away, our species would have died out long ago. By contrast, a deep learning algorithm must process thousands of tiger images before it begins recognizing tigers in images and video. Even then, neural networks trained on tiger photos do not reliably recognize other abstractions and representations of tigers, such as cartoons or costumes.


This final category refers to systems that exhibit superhuman intelligence and capabilities. “Systems That Evolve” are entities capable of dynamically changing their own architecture and design to adapt to environmental needs. As humans, we’re limited in our intelligence by our biological brains, also known as “wetware”. We evolve through genetic mutations across generations, rather than through re-architecting our own biological infrastructure during our lifetime. We cannot simply insert new RAM if we wish to augment our memory capacity, or buy a new processor if we wish to think faster.


Will superhuman machines be good or bad for humanity? While no one can predict what superintelligence will look like, we can take measures today to increase the likelihood that intelligent systems we design are effective, ethical, and elevate human goals and values.


TOPBOTS is the business leaders’ guide to artificial intelligence and bots. For the full experience, visit

Written by Mariya Yao, Chief Technology & Product Officer at Metamaven and Editor-In-Chief at TOPBOTS.


