End of AI Confusion: The XLabs ‘A2I’ Matrix

Radhika Dirks
XLabs.ai
Feb 17, 2018 · 7 min read

I heard they launched that Tesla into space because it accidentally became self-aware!

What is artificial intelligence? RandomStartup.ai told me they use AI, but that’s not really AI…is it? I know neural nets are AI. But I want the one with the deep learning, right? What’s deep about it anyway? Does it write poetry? Speaking of, why does AI play so many board games anyway? Do you think AI will take all our jobs, or use us as slave labor?

If you are anywhere near AI, these are the questions coming your way. The frustrating part is that some of them are genuinely hard to answer. Here we’ll explore where a lot of this confusion (and, as a result, the hype!) comes from, and then give you a simple, easy framework, the XLabs A2I matrix, that not only makes these questions easy to answer but gives you an intuitive understanding of AI.

What is AI? A quick video summary of this blog.

The Confusion

What is AI? We can’t really talk about AI without talking about what intelligence is. And there we run into our first problem. The only thing people in the field agree on is the ‘A’ part: the artificial part. That is, AI likely refers to something that is non-human, not natural life, likely inorganic, and most likely built on silicon, at least for now. What about the ‘I’ part? What is intelligence?

Simone de Beauvoir, an existentialist philosopher, described intelligence as a way of disclosing being. NASA directs all its search for intelligent life outside of planet Earth. A popular Silicon Valley definition is: ‘intelligence is general-purpose problem solving’. Are humans just general-purpose problem-solving machines? Surely we can do better. If we go back all the way to the roots of AI with Haugeland’s ‘good old-fashioned AI’ (GOFAI), intelligence is slated as a serial, discrete, explicitly symbolic, rule-based, intentional rationality. I take offense at all of that (and yeah, good luck writing an intelligent program within just that framework). Thankfully, we have moved past almost all of those terms, because AI, like computing, has mostly evolved in the wild, brazenly torpedoing expectations and rules (did someone say rule-based?).

Since its kickoff in the mid-1900s, AI as a field has come up with very different kinds of intelligences, as well as programs and paradigms of computing that no longer fit the realms of traditional intelligence (e.g., parallel processing, unsupervised and model-free learning). But the most popular rigorous definition of modern intelligence à la AI today unfortunately still channels some of these simplistic undertones. Legg and Hutter measure universal intelligence as ‘an agent’s ability to achieve goals in a wide range of environments’. An ironically narrow definition, hyper-ironically intended to be non-anthropomorphic and, more so, universal, yet formalized with reinforcement learning (which might be non-anthropomorphic, but universal?). Plus, it is a formalism resting on extremely shaky ontological grounds (e.g., what is an agent? Why is this intelligence, or agency, universal?)!
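For readers who want the formalism, Legg and Hutter’s measure (from their 2007 paper, ‘Universal Intelligence: A Definition of Machine Intelligence’) can be written compactly. The sketch below paraphrases their notation:

```latex
% Legg–Hutter universal intelligence of an agent \pi (paraphrasing their 2007 paper)
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% E             : the set of computable, reward-bounded environments
% K(\mu)        : the Kolmogorov complexity of environment \mu
% V^{\pi}_{\mu} : the expected cumulative reward agent \pi achieves in \mu
```

Notice that ‘agent’, ‘environment’, and ‘reward’ all enter as undefined primitives, which is precisely the shaky ontological ground objected to above.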

So what is intelligence? Intelligence is ambiguous, moving, and elusive. Every time we pen something down and formalize systems of measurement, we appear to sneak a peek at what we left out. Gödel, the mathematical god of inconsistency and incompleteness, would be proud. We can’t seem to capture intelligence on paper yet, and that is part of the reason AI comes off as hype. If we can’t even describe, articulate, or pin down 50 percent of the name of the field (the ‘I’ part of AI), the result is an insane amount of confusion and disappointment¹, because then what is AI? Is AI a technology? A set of tools? A philosophy? Deep or machine learning? Or just a bunch of hope?

The Problem

The clarity with which we define something determines its usefulness — Tony Blauer

The problem with all this confusion is that it slows the field down drastically. Developers work at cross purposes. The person who can clearly articulate a challenge will always innovate faster than the person with only a vague notion of where they are headed. Investors, rightfully confused, invest inefficiently. Customers struggle to understand technology differentiation and value propositions. Perhaps most importantly, builders (and users!) have no clear expectations to beat; the history of AI is riddled with moving the intelligence goalposts every time the field achieves something significant (e.g., playing chess → beating humans → beating grandmasters → becoming grandmasters! → merely playing chess isn’t intelligence). We must present a simple picture of what AI is, because great technology, like great art, arises from clear thinking amidst muddled being.

A2I Matrix: a new AI framework

The A2I matrix is the framework we at XLabs like to use for thinking about AI and going beyond AI. Since the day I first drew this picture, our progress has drastically increased, and the range of decisions, from the meta to the method, has become clearer. The XLabs A2I matrix shown below has clarified the thinking of everyone we’ve shared it with, from the most technical to the least, in less than a minute:

XLabs A2I Matrix: the horizontal axis maps what is easy (left) to hard (right) for humans; the vertical axis maps the same for today’s computers. Artificial intelligence (AI) sits in the top left quadrant. Amplified intelligence (Amp I) sits in the top right.

Trivial quadrant: The bottom left quadrant maps tasks that are currently considered easy for both computers and humans: trivial. Example: 2 + 2.

Traditional Computing: The bottom right contains tasks that are easy for today’s computers but hard for humans to process, e.g., calculations involving large numbers or managing social networks with vast numbers of connections.

The AI Quadrant: Things begin to get interesting in the top left quadrant: the AI quadrant. These are tasks that come naturally to humans: speech, vision, driving. Software that can mimic these human talents is considered AI! Because AI algorithms can currently only solve specific human-level intelligence problems, such as recognizing images (hey, that’s a cat!), transcribing speech, or driving cars, they are referred to as narrow AI. Specific narrow AI software is also required for robotics (e.g., for robots in logistics, care, assembly lines, etc.). But the field of AI primarily refers to the software; the form the machine takes, i.e., the hardware, is secondary.

Artificial intelligence mimics human abilities
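To make ‘narrow’ concrete, here is a deliberately tiny, hypothetical sketch of the shape narrow AI takes: one model, one task, zero transfer. It is a nearest-centroid toy on made-up 2-D features, not a real vision system.

```python
# A toy "narrow AI": a nearest-centroid classifier that does exactly one
# task (separating two made-up classes) and nothing else. Illustrative only.
from statistics import mean

# Hypothetical 2-D feature vectors for two classes.
CAT_EXAMPLES = [(0.90, 0.80), (0.80, 0.90), (0.95, 0.85)]
DOG_EXAMPLES = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25)]

def centroid(points):
    """Average point of a list of 2-D feature vectors."""
    return (mean(p[0] for p in points), mean(p[1] for p in points))

CENTROIDS = {"cat": centroid(CAT_EXAMPLES), "dog": centroid(DOG_EXAMPLES)}

def classify(features):
    """Label a feature vector with its nearest class centroid."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(CENTROIDS, key=lambda label: dist2(features, CENTROIDS[label]))

print(classify((0.85, 0.90)))  # -> "cat"; ask it anything else and it's lost
```

The whole system is one lookup over one task; that single-purposeness, scaled up, is what ‘narrow’ means in the quadrant above.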

The holy grail for most traditional AI technologists is to build a general AI: a single algorithmic platform that can mimic humans in all our thinking capacity (picture software that can handle that whole quadrant!). Artificial General Intelligence (AGI) would simply be a platform that can mimic substantially all human capacities. Cross-functional, but still human. While a few efforts are underway, there are currently no viable approaches to a general AI. Moreover, we have no viable theoretical framework for such a possibility. Embedding such a hypothetical general-intelligence software brain into a robotic body is what would enable a Hollywood-style AI.

Amped I: Since we first drew this picture, it’s been the top right quadrant that excites us the most: a new area we at XLabs call Amplified Intelligence. These are things that humans currently find very difficult to do, things we haven’t evolved for, and that computers still cannot do very well! This quadrant covers transhuman capabilities: amplified intelligence software that can solve problems that do not come naturally to humans.

Amplified Intelligence supersedes, but more importantly, differs from human intelligence

Algorithms that are far smarter than humans in specific problem areas are already possible. Very complex problems that are hard for humans to foresee and analyze fall into this category. These are problems that require massive amounts of memory but, more importantly, carry a lot of computational complexity: interactions that occur simultaneously at different time scales with lags, different types of interactions, and a hierarchy of complexity that we not only don’t really comprehend but (with the help of Amped I) might not need to. Given the societal and selection pressures through which we evolved, humans aren’t necessarily designed to solve these problems. Examples of problems in amplified intelligence include drug discovery, predicting stock markets, predicting and understanding socio-economic disruptions, automating science or discovery, and more. These problems tend to have the following in common (a toy illustration follows the list):

  1. Vast quantities of associated data
  2. Highly unobvious correlations and nonlinear dynamics
  3. System memory
  4. Time lags
  5. Feedback
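As a toy illustration of items 2 and 5, here is a minimal sketch (parameters are made up for the demo) of the logistic map: a one-line nonlinear feedback system in which a one-in-ten-million difference in starting conditions is amplified until two runs have nothing in common. Real Amped-I problems layer memory and time lags on top of this kind of sensitivity.

```python
# Toy nonlinear feedback system: the logistic map x[t+1] = r * x[t] * (1 - x[t]).
# Each state feeds back into the next, and a tiny difference in initial
# conditions grows until the two trajectories decorrelate entirely.

def simulate(x0, steps=40, r=4.0):
    """Iterate the logistic map from x0 for `steps` steps (r = 4 is chaotic)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(r * x * (1.0 - x))  # nonlinear feedback
    return xs

a = simulate(0.2000000)
b = simulate(0.2000001)  # a one-in-ten-million nudge to the start

for t in (0, 10, 20, 30, 40):
    print(f"t={t:2d}  a={a[t]:.6f}  b={b[t]:.6f}  gap={abs(a[t] - b[t]):.6f}")
```

Forecasting even this one-variable system more than a few dozen steps out is hopeless by hand, which is exactly the kind of task evolution never prepared us for.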

Power of the A2I matrix

Now, the reason the XLabs A2I framework for intelligence becomes really important and interesting is twofold. First, we begin to have a common scaffolding for talking about AI and Amp I. Second, we begin to realize that the morphologies we pick for the development of our algorithms matter! They depend on the task you are trying to solve, and the match between the algorithmic signature and the problem signature needs to make intrinsic sense. The thing to remember, and what is so cool about the A2I matrix, is that it is a snapshot in time. Meaning, this matrix would have looked very different 50 years ago (because what computers could do then was vastly different), and I sincerely hope it will be very different 50 years from now. Interestingly, what humans can and cannot do has also changed over the years. We can easily see a future where humans no longer remember how to drive!
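One way to internalize the framework is to treat the matrix as data. Here is a minimal sketch (the task placements are our illustrative snapshot, not a canonical taxonomy): two booleans, hard for humans and hard for today’s computers, pick out one of the four quadrants.

```python
# The A2I matrix as a lookup table: (hard for humans?, hard for computers?)
# selects a quadrant. Placements are a snapshot in time and will drift,
# which is exactly the point of the matrix.

QUADRANTS = {
    (False, False): "Trivial",
    (True,  False): "Traditional computing",
    (False, True):  "Artificial intelligence (AI)",
    (True,  True):  "Amplified intelligence (Amp I)",
}

TASKS = {  # illustrative examples only
    "2 + 2":                     (False, False),
    "multiply 40-digit numbers": (True,  False),
    "spot the cat in a photo":   (False, True),
    "discover a new drug":       (True,  True),
}

for task, axes in TASKS.items():
    print(f"{task:28s} -> {QUADRANTS[axes]}")
```

Re-run the snapshot in a few decades and tasks should have migrated between quadrants, just as the paragraph above predicts.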

Footnotes:

  1. One of our investors describes happiness as the difference between expectations and outcomes. AI, with its historic theme of repeated exultations and frustrations, is a prime example.

Photo credit: Andy Kelly & Matthias Goetzke
