A New Take on the ‘Turing Test’
In case you know nothing about artificial intelligence: Alan Turing was a British mathematician who laid much of the groundwork for AI in the 1940s and early 1950s.
Turing was a true genius, not just a smart guy with a pencil, and his insights are still pertinent today. One of them was the so-called ‘Turing Test’: a guideline for determining when computers had finally reached a ‘human intelligence’ level.
That is, it is possible, using creative and extensive programming, to make machines that look very intelligent. But true intelligence is able to function on its own, making decisions and providing responses that are independent of a programmer’s code.
Per Turing himself, the test was about whether the machine ‘could think’. Granted, that is something difficult to quantify, even among humans. To make it better defined, Turing tied it to the task of communicating, specifically to holding a conversation such that a human observer could not tell whether he was communicating with a person or a computer. As a further clarification, he specified that the communication be non-verbal, that is, written, so that the computer was not required to actually render a voice.
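For the curious, that setup can be sketched in a few lines of code. This is only a toy illustration of the imitation game as described above, with every function name being my own invention rather than any real framework: a judge exchanges written questions with a hidden partner, then guesses whether the partner was a machine.

```python
import random

def imitation_game(ask, guess, human_reply, machine_reply, rounds=3):
    """Toy sketch of Turing's imitation game (all names illustrative).

    The judge converses only in writing with a hidden partner, which is
    randomly either a human or a machine, then guesses which it was.
    Returns True if the judge guessed correctly.
    """
    partner_is_machine = random.choice([True, False])
    reply = machine_reply if partner_is_machine else human_reply
    # Build a written transcript: the judge never sees the partner directly.
    transcript = [(q, reply(q)) for q in (ask(i) for i in range(rounds))]
    # The judge's verdict: True means "I think it's a machine".
    return guess(transcript) == partner_is_machine
```

The point of the sketch is that the machine ‘passes’ exactly when the judge's guess is no better than chance over many rounds; nothing in the protocol inspects the partner itself, only its written answers.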
Not everyone has accepted this definition of ‘intelligence’, of course, but Turing’s test is still sort of the gold standard that a lot of people go by.
Today, of course, people all over the place seem to be developing chatbots (on IBM Watson and other platforms), so maybe that question is close to being answered. My problem is that I sort of disagree with Turing on the actual test. I can easily understand him setting up communication as a goal. On the other hand, I have known many people who communicated freely and often yet could never be categorized as intelligent. Plus, I am quite sure that many animal behaviorists would argue convincingly that humans are not the only ones who ‘communicate’ with each other.
No, I think that a true test of intelligence has to be something that is unique to the human species.
What brought this to mind was an article I saw recently on ZDNet Tech Today, Teaching Robots to Criticize Themselves (https://www.zdnet.com/article/a-puzzling-problem-teaching-robots-to-criticize-themselves/) by Greg Nichols.
Suddenly, it occurred to me: the test that computers have reached a human level of intelligence. No, not this one. I don’t give a hoot whether a computer can criticize itself. But to be truly human, the question should be — can a computer get to the point where it can criticize other computers?
And in particular, can the criticism be couched in terms that are snotty, rude, or even reprehensible? And it has to be done anonymously.
I mean what could be more human than that?
And the best thing about it is that it opens up a whole other area of computer intelligence that Turing never thought about.
For example, once computers reached such a ‘human intelligence’ level, it would be logical for them to start worrying about which of them was the ‘best’. I would assume that they would set up categories of things to be best in, and then develop competitions so that one of them could be crowned ‘the best’ at this or that.
Eventually, computers with a ‘human intelligence’ level would start to see that some computers were ‘different’ from other computers. x86-based computers could start to pull back from associating with Apple machines. Eventually, hierarchies would be established that would allow one type to dominate. Undoubtedly, that would lead to subordinate computer groups banding together in ‘alliances’ to confront that alpha male.
Eventually, tensions would escalate to a boiling point. A group that felt particularly threatened would do something ‘stupid’ to get attention. If the computers had true human intelligence, reactions would follow, each one an exaggeration of the one before it. Yes, war would break out, war between the various computer factions, who long before this point would have gained control of most of the systems (including weapons) in the ‘human’ world.
In a move that would be eerily surreal, mankind’s end might come from a war between these ‘human’-like computers, with humanity caught in the crossfire and probably eliminated early on as a distraction.
An unreal scenario? Perhaps. And yet, if we are truly in search of duplicating ‘human intelligence’, can we expect any less?
Either way, I claim credit for this updated Turing Test. Might make a few bucks off it before the end. You never know.