Can the Turing Test confirm AGI?

Peter Voss
3 min read · Jan 17, 2017


The Turing Test asks both too much and too little, in different ways.

It asks too much by insisting that the AGI needs to hide its strengths and fake human limitations, human-specific experience, and human quirkiness. This is quite unnecessary, and would lead to man-years of unneeded effort to give it this particular acting ability. Furthermore, it would undermine trust in the machine if it were actually constructed to lie to us so effectively. When I ask an AGI whether it had a puppy as a child, I would want it to answer that it didn’t have a childhood because it is a machine. Any AI that can learn, reason, and make discoveries like a competent PhD-level researcher should be called AGI, even if it’s unwilling to fool you.

It asks too little in the way TT competitions are run. As we have already seen with the ‘first machine to win the TT’, the test protocol is quite limited and there are many ways to game the system. Even with stricter criteria, one could imagine that a purpose-built, TT-busting, narrow AI could fool many (if not most) judges. For example, a system that used a record of a huge number of past tests might be able to anticipate how to answer in order to win (somewhat similar to the way Watson uses huge databases and thousands of algorithms to win at Jeopardy).

Because the TT is blind (it doesn’t care how you achieve results), and because it is run as a competition and is largely a binary exercise, it attracts projects that concentrate on how best to fool the judges.

The bottom line is that anyone seriously working on AGI (or even just more intelligent AI) will not put in any effort specifically to win a TT; it’s simply a waste of time. Real intelligence is worth many times what a TT win could ever be. In this way one can actually identify TT competitors as not being serious about AGI.

Finally, there are people who claim that “we really don’t know what intelligence is”, and that we therefore need the TT to tell us when “we’ve achieved it”. I would certainly not want to fund any researcher who felt that way. If you don’t have a good enough understanding of what intelligence is (what it means in the context of AGI), then you can hardly engineer it. In that case you’re just trying to stumble upon it by chance, or by trial and error. Not a great way to go about it, to put it mildly.

How else can we test progress towards AGI?

What AGI researchers (and investors) should want are both quantitative and qualitative measures: what is being achieved, and how it is being achieved. Therefore, we need:

a) to be able to measure incremental progress, both in the level of ability and in scope (more below).

b) to be able to see how results are achieved, i.e. do they conform to a plausible theory of intelligence, or are they, for example, just hard-coded or brute-force trained in an inflexible, non-scalable way?

We measure progress by measuring performance on Core Cognitive Abilities, which are part of our overall AGI theory/design. I suggest that this is the right testing approach. Tests that measure a specific/narrow human-level ability provide no guarantee that skills can be generalized. In fact, any specific tests that are not part of a grand, integrated AGI strategy tell us next to nothing about AGI potential.
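To make the idea of incremental measurement concrete, here is a minimal sketch in Python of how one might track both level of ability and scope across a set of core cognitive abilities. The ability names, difficulty levels, scores, and pass threshold are purely hypothetical placeholders for illustration; they are not part of any actual Core Cognitive Abilities benchmark.

```python
# Hypothetical sketch: tracking incremental progress on core cognitive abilities.
# All abilities, levels, and scores below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class AbilityResult:
    ability: str    # e.g. "concept learning", "short-term memory"
    level: int      # difficulty tier of the test battery (1 = easiest)
    score: float    # fraction of tasks passed at that level, 0.0 to 1.0

def progress_report(results: list[AbilityResult]) -> dict[str, dict]:
    """Summarize level of ability (highest tier passed per ability)
    and scope (how many distinct abilities are covered at all)."""
    report: dict[str, dict] = {}
    for r in results:
        entry = report.setdefault(r.ability, {"max_level_passed": 0, "scores": {}})
        entry["scores"][r.level] = r.score
        # Arbitrary pass threshold of 75% for this sketch.
        if r.score >= 0.75 and r.level > entry["max_level_passed"]:
            entry["max_level_passed"] = r.level
    return report

if __name__ == "__main__":
    results = [
        AbilityResult("concept learning", 1, 0.90),
        AbilityResult("concept learning", 2, 0.60),
        AbilityResult("short-term memory", 1, 0.80),
    ]
    for ability, summary in progress_report(results).items():
        print(ability, summary)
```

A harness along these lines would show progress as a profile across abilities and difficulty tiers rather than as a single pass/fail verdict, which is the point of the approach described above.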
