A Machine with a Soul

Does passing the Turing test bestow consciousness on a computer? And what would we do if it did?

Good Bad Science
The Good The Bad and The Science

--

On an otherwise unremarkable Saturday in June 2014, a group of computer scientists, public figures, and celebrities gathered at London’s Royal Society. They were all there for one reason — to engage in a text-based chat game to determine if a computer could pass the “iconic” Turing test.

A few hours later, the results were in. Professor Kevin Warwick of the University of Reading announced that a chatbot had successfully tricked 33% of the judges into thinking it was a real boy, and had therefore become the first computer to pass the Turing test:

It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting. — Prof. Kevin Warwick

Within hours, breathless tweets, likes and pins swept across the internet, announcing this amazing result to the world, or at least to the subculture that apparently really f***ing loves science but doesn't seem to have much time or inclination for actual critical analysis. A day or so later came the rebuttals and debunkings from the more inquisitive corners of the online universe. So what really happened, and what does a machine passing a Turing test mean for society?

In 1950, mathematical pioneer and legendary code-breaker Alan Turing published a paper entitled “Computing Machinery and Intelligence”. In it, Turing set out to address how one might answer the question “Can machines think?” — that is, how we could test whether a computer could independently create and express its own original ideas in a manner equivalent to a human.

In this paper, Turing proposed a test that he called “The Imitation Game”: two subjects, a man and a woman, take part in a text-based conversation with an interrogator, whose goal is to determine which subject is the man and which is the woman. In Turing’s game, the man’s goal is to deceive the interrogator, while the woman’s is to help them. Turing reasoned that successfully imitating a woman requires the man to employ abstract and creative thought, a knowledge of social norms and rules, and an ability to integrate complex imaginary geographical and social histories into his answers.

Turing then proposed replacing the male subject with a computer: would a machine be able to beat the interrogator as often as the human man could? If so, Turing argued, it could be concluded that the machine had achieved a level of original “thinking” comparable to a human’s.
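The key property of this comparative design is its built-in baseline: a perfect imitator leaves the judge with nothing to go on, so the judge's long-run accuracy falls to the 50% of pure guessing. A toy simulation (not from Turing's paper; the `p_tell` parameter is invented for illustration) makes that baseline concrete:

```python
import random

def one_round(p_tell):
    """One round of the comparative game. p_tell is a made-up
    probability that the machine's answers give it away; otherwise
    the judge can only guess between the two subjects."""
    if random.random() < p_tell:
        return True                # a tell: the judge spots the machine
    return random.random() < 0.5   # perfect imitation: coin-flip guess

def judge_accuracy(p_tell, rounds=50_000):
    """Long-run fraction of rounds in which the judge is right."""
    return sum(one_round(p_tell) for _ in range(rounds)) / rounds
```

With `p_tell = 0` the judge's accuracy hovers around 0.5, and any "tell" pushes it above chance, which is exactly the signal the comparative design is built to measure.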

At the Royal Society, “Eugene Goostman”, a 13-year-old boy from Ukraine, had been busy. As a participant in the test, he was engaged in 30 different five-minute text conversations with the celebrities and scientists who made up the panel of event judges. At the end of the day, 10 of the 30 judges who talked to Eugene thought he was a real young man from Eastern Europe. The 20 judges who realized that Eugene was, in fact, a very well-programmed computer were probably alerted by his evasive answers, awkward linguistic phrasing, and very limited social knowledge.

Nevertheless, the organizers declared that Eugene had passed the Turing test by deceiving more than 1 in 3 of the judges, a hypothetical threshold Turing had proposed in his thought experiment (and nowhere near the 95% confidence level against which most scientific studies are measured). Turing’s experiment had also required the interrogator to determine which of two subjects, one human and one a computer, was the human; at the Royal Society, Eugene Goostman was interrogated in isolation, with each judge making a yes/no determination as to whether he was human. Even if Turing’s 1-in-3 success rate could be validated, it would not apply to this experiment, because the experiment recorded a fundamentally different output from the judges.
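A panel of 30 judges also leaves enormous statistical uncertainty. A quick back-of-envelope sketch (using a simple normal-approximation interval, with the event's reported 10-of-30 result) shows how wide the plausible range around that headline 33% really is:

```python
import math

def wald_interval(successes, trials, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a
    proportion: rough, but fine for a back-of-envelope estimate."""
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return p - half_width, p + half_width

low, high = wald_interval(10, 30)    # 10 of 30 judges deceived
print(f"{low:.2f} to {high:.2f}")    # roughly 0.16 to 0.50
```

With only 30 conversations, the data are consistent with anything from roughly a 16% to a 50% deception rate, so the "more than 1 in 3" claim rests on a razor-thin margin.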

Ultimately, Eugene Goostman was able to beat the Turing test because Eugene Goostman was specifically designed to beat the Turing test. The persona of a teenage subject, a non-native English speaker, gave cover to its limitations in conducting a live conversation; enough judges were tricked into ignoring the machine’s shortcomings by the illusion that it was a linguistically-challenged young person with limited social skills and knowledge.

Turing proposed this experiment as a way to test whether a machine could think for itself. The purpose of the Turing test is to assess creative, integrated strategic thinking that is the hallmark of human consciousness. By gaming the way that the test was assessed, reducing it to textual trickery, the experiment at the Royal Society was a superb demonstration of programming finesse, but did not shed any new light on the feasibility of genuine artificial consciousness.

What if a computer did pass a true Turing test, as originally envisioned by Alan Turing? Would the ability of a computer to imitate a human so successfully that another human cannot tell them apart be a true test of “thinking”, or just a really awesome, high-tech parlor trick?

Turing’s understanding of mathematics, computation, and the possibilities for artificial intelligence was far ahead of his time. His understanding of the human mind and cognition, however, was firmly rooted in the age he lived in. After all, this was a man who struggled deeply with depression, in large part because the prevailing “wisdom” of his time held that his homosexuality was deviant behavior. He eventually accepted chemical castration as a “cure”, thanks to the primitive assumption that male sexuality was a simple testosterone-driven desire that could only be sated through sex, and could therefore be cured by suppressing the flow of hormones.

In Turing’s time, the early 1950s, neuroscience had barely moved on from the phrenology and charlatan psychotherapy of the Victorian age. What little was understood of mental function was derived mostly from brain-injured patients, and our cellular understanding of neuronal processing was limited to a few electrical experiments on squid and sea slugs. Today, we understand human consciousness as part of a continuous stream of signalling that occurs in every part of the brain, all the time. While some activities, such as visual processing or movement, are coordinated from discrete regions of the cerebral cortex, the totality of a person’s mental existence emerges from a global integration of signalling across the billions of nerve cells in the brain. When someone claims that this or that section of cortex is the seat of the mind or self, it is usually thanks to a narrow definition that artificially simplifies consciousness to a single trait, such as self-awareness or integrated memory retrieval.

In many ways, the original Turing test reflects the simplistic understanding of human thought that existed in the 1950s — that imitating a human well enough to be mistaken for one confirms that a computer “thinks” for itself. A well-controlled Turing test, given to a computer that was not specifically programmed to pass the Turing test, might turn out to be an excellent benchmark for the development of integrated, creative and original strategic calculation, but it is unlikely to answer the complex questions of self-awareness, autonomy, or individuality that are central to our understanding of “consciousness”.

Conversely, a dog is unlikely ever to pass a Turing test, yet most dog owners would agree that their pets have a consciousness and agency of their own.

In many ways, the experiment at the Royal Society epitomizes our civilization’s conflicted relationship with science and technology. This is far from the only “scientific” event more focused on grabbing headlines and publicity for a technology than on rigorously examining the claims made for it. In a society that seeks heroes and superstars to worship, we elevate figures like Turing, Einstein, and Tesla to almost superhuman status, and Turing’s very real suffering at the hands of primitively-minded authorities only exacerbates his status as a martyr to the cause. Our understanding of science as a deliberate, exact examination of the Universe has been supplanted by an Instagram-ready hunger for stunning imagery and witty non sequiturs to be deployed against unbelievers.

Our obsession with data, but not with its rigorous scientific examination, has led us to a world where it is too easy to lose sight of our real goals. As companies and governments find more effective ways to track their progress against benchmark metrics, it is all too easy to forget the real goals of a project and simply work on maximizing the metric scores — leading to schools “teaching to the test”, or hospitals minimizing [reported] waiting times at the expense of delivering effective care, among many other sad examples. Similarly, by creating a machine designed to “win” at the Turing test through misdirection and obfuscation, we overshadow the diligent work of the thousands of people who seek to understand how to make machines smarter, more responsive, and better able to respond to people’s needs.

As artificial intelligence becomes more and more a part of our lives, it is quite conceivable that we will someday need an effective Turing test to tell humans apart from machines. If computers were to develop real consciousness, self-awareness, and agency, then we would have to answer serious questions about our relationship with them. Does real consciousness bestow rights, such as the right to live? If so, could you then “murder” a computer? What constitutes the essence of the conscious computer — the hardware or the software? Would synthetic beings be treated as a special class, akin to apes or whales — not equal with humans, but legally protected from unnecessary killing and experimentation? In this scenario, would a “robot rights” movement emerge to challenge their second-class treatment by humans?

The world is changing rapidly. Disparate societies are more interconnected than ever, and systems are ever more autonomous. As our machines grow more capable and more independent, our relationship with them changes. In order to remain masters of our own destiny, humans must ask the hard questions about our relationship with technology, and ask them scientifically, before events overtake us. To do otherwise is to surrender to assumption, superstition, and decline.

--