AI’s Next Superpower: Self-Knowledge

Gary Blauer · Minds Abound · Feb 24, 2018 · 3 min read

When an AI does something better than humans can but is not perfect, what does it need to do to be truly useful? Know its limitations.

When an AI with a track record of accuracy solves a complicated problem but humans don’t know how much to trust it, what does it need to do to reassure them? Explain its decision.

When the amount or usefulness of available information varies, what should an AI do to make this clear? Express confidence levels in its conclusions.

These are seen as problems: obstacles to be overcome before we can turn the systems on and stop thinking about them. In the big picture, this is wrong.

Dealing with these challenges will push AI to new levels of sophistication and competence. And the relationship between humans and machines will change substantially.

Serious efforts are underway. Self-driving cars need, and will continue to need, occasional help. Major developers (including GM, Waymo, and Nissan) are replacing steering wheels and brakes with tele-operation centers that can take over control when necessary. To make these centers work, developers are analyzing the “disengagements” of their test vehicles (events in which a human must take over) so that they can predict and handle them remotely. Reliable identification of unmanageable circumstances is now the key to putting these vehicles into general use.
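The handover logic itself is simple enough to sketch. Here is a minimal illustration in Python; every name in it (the planner, the confidence floor, the takeover request) is hypothetical, not any developer’s actual stack:

```python
# A toy confidence gate: hand off to a tele-operation center instead of
# guessing when the planner rates its own read of the scene as shaky.
import random

CONFIDENCE_FLOOR = 0.85  # below this, treat the scene as unmanageable


def plan_route(scene):
    """Stand-in planner: returns driving commands plus a self-assessed confidence."""
    return "steer/throttle commands", random.random()


def request_remote_takeover(scene):
    print(f"Predicted disengagement in {scene!r}: requesting a remote operator")


def drive_step(scene):
    plan, confidence = plan_route(scene)
    if confidence < CONFIDENCE_FLOOR:
        request_remote_takeover(scene)  # flag the scene rather than improvise
        return None
    return plan


drive_step("construction zone with a flagger giving hand signals")
```

The hard engineering is in producing that confidence number honestly; the handover itself is the easy part.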

DARPA is funding a dozen separate programs under its Explainable Artificial Intelligence initiative. Some of these identify critical decision factors or archetypal examples on the fly. Others use “bolt-on” or higher-level systems to assess the base network in order to understand its decisions.

Nvidia has demonstrated a deep learning system that can pilot a car and, more interestingly, highlight the areas of the input on which the system is focusing at any given moment to make its decisions.
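Nvidia’s published technique has its own machinery, but the underlying question (which pixels is the steering decision most sensitive to?) can be approximated with an ordinary gradient saliency map. The sketch below, in PyTorch, is a generic illustration of that idea, not Nvidia’s method, and the tiny network is a stand-in:

```python
# Generic gradient saliency: which input pixels most influence the
# network's steering output? (Not Nvidia's exact technique.)
import torch
import torch.nn as nn

# Stand-in driving net: one camera frame in, one steering value out.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 31 * 98, 1),  # sized for a 3x66x200 input after the conv
)

frame = torch.rand(1, 3, 66, 200, requires_grad=True)  # one camera image
steering = net(frame)
steering.sum().backward()  # push gradients back to the pixels

# Large gradient magnitude = pixel the decision is most sensitive to.
saliency = frame.grad.abs().max(dim=1).values  # collapse color channels
print(saliency.shape)  # torch.Size([1, 66, 200]): a heat map over the frame
```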

Some research programs determine which cues lead to automated vision decisions by repeatedly altering input in systematic ways and observing output changes. A similar approach is being tried with text-based processors. Other approaches actually examine the inner elements of networks to find the influencing factors.
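The perturbation approach is especially appealing because it needs no access to the network’s internals. A bare-bones occlusion sketch, with a toy stand-in for the model, looks like this:

```python
# Perturbation-based explanation: blank out regions of the input, re-run
# the model, and record how much the output changes.
import numpy as np


def occlusion_map(predict, image, patch=8):
    """predict: any function mapping an HxW image to a scalar score."""
    h, w = image.shape
    baseline = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # blank one patch
            # A big score change means this region mattered to the decision.
            heat[i // patch, j // patch] = abs(baseline - predict(masked))
    return heat


# Toy usage: a "model" that only looks at the upper-left quadrant.
img = np.random.rand(32, 32)
score = lambda x: float(x[:16, :16].sum())
print(occlusion_map(score, img))  # hot only in the upper-left patches
```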

These are not distant or hypothetical developments. Self-driving cars without on-board means of control are expected to be in commercial use within a year. European Union rules requiring that explanations of AI decisions be available are set to take effect within months. Progress is rapid.

Taken together, these developments are not just another modest rung on the technology ladder. They will produce fundamentally different systems and human-machine relations.

AIs will have knowledge of their limitations, will be able to delineate their decision processes, and will be able to assess and communicate probabilities. This will make them more useful and, of even greater importance, more trustworthy.

We will be much more comfortable as passengers in a car that is constantly watching for circumstances it might not be able to handle, rather than simply doing the best it can to deal with things on its own.

We will be much more confident in a medical diagnosis, a legal determination, or a cyber security recommendation backed by detailed factor analysis rather than the opaque conclusions we get today.

We will be reassured by the ability to search for a course of action that seems, say, 90% promising rather than having to blindly accept whatever seemed best to the system.
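Mechanically, that kind of search is trivial once a system reports calibrated probabilities; the shape of it, with made-up option names, is just a filter:

```python
# Surface only the options the system itself rates above a chosen
# confidence level, rather than silently returning the single top answer.
def promising_options(scored_options, floor=0.90):
    """scored_options: (action, probability) pairs from some model."""
    keepers = [(a, p) for a, p in scored_options if p >= floor]
    return sorted(keepers, key=lambda ap: ap[1], reverse=True)


candidates = [("option A", 0.93), ("option B", 0.97), ("option C", 0.55)]
print(promising_options(candidates))  # [('option B', 0.97), ('option A', 0.93)]
```

The substance, again, lies in making those probabilities trustworthy in the first place.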

We will also likely have quite different interactions. Imagine comfortable speech front ends to these self-assessing systems:

AIs will express hesitations, percentages, and reasons. They will be able to reconsider a position given new input and explain the new conclusion. They will be endlessly patient in these interchanges. They might even seem humble.

It is correctly noted that today’s deep learning artificial intelligence systems, which are fast, parallel, and hard to analyze, are better called artificial intuition. Instilling reason in AIs is seen as essential to anything resembling general intelligence. But reason is not a single, indivisible thing, and objectively assessing and reconsidering intuition is surely a form of it.

We are about to welcome into our lives artificial intelligences which are careful, accessible, understandable.

Reasonable.


Gary Blauer · Minds Abound

Intelligence and all its new forms. Former neural net researcher (long ago), coder, tech analyst, Wall St. research director.