70 years Back to the Future: Turing’s insights in the year of pandemic

Find the audio version on the “Llamazing Data” podcast (and other great topics every fortnight)

Kseniya Kalashnikova
7 min read · Oct 2, 2020

Aside from the “great news” of the COVID pandemic, the global economic crisis and the riots in Belarus and the USA, the year 2020 holds one truly remarkable occasion which is, to our own shame, being overlooked. 70 years ago the outstanding English mathematician Alan Turing published his in many ways revolutionary paper “Computing Machinery and Intelligence”, which covered the topic of artificial intelligence. I would like to discuss how many of his insights found their way into the reality we live in today.

An important thing to mention is that Alan Turing never actually referred to “intelligence” in machines in the sense that it ended up having. Instead of trying to prove whether machines can “think”, he shifted the focus to whether machines can imitate humans effectively enough that whoever is in contact with one will fail to notice the absence of “flesh and blood” in their vis-à-vis. The approach he proposed for that experiment is now known as the Turing Test.

But let’s take a closer look at the issue. Based on a game popular in Victorian times, in which the players have to find out whether the person hidden behind a curtain is a man or a woman, Turing’s “Imitation Game” aims to figure out whether the hidden companion is a human or a machine. Using a communication device (the teleprinter stated in the paper has logically evolved into today’s chats and messengers), the judge can ask various questions to help distinguish who is who. And the game begins! To make the game more exciting and less predictable, the mysterious participants try to confuse the judge with their answers, putting the judge’s reasoning to the test: the human acting like a machine, and the machine acting like a human.

Source: H2S Media

If the human’s mischief can easily be spotted by giving them a complex mathematical problem to solve in a short period of time, the machine’s game might be trickier to crack. The machine might withhold the answer to the problem (solved, probably, within milliseconds of receiving it) in order to seem human. Asking personal questions about childhood and memories might not work either: fitting answers will have been carefully prepared by the machine’s developers. In the end, the machine’s game boils down to doing its best to fool the judge.
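The two tactics above, pacing the answer like a human and falling back on canned personal replies, can be sketched in a few lines. This is a toy illustration only, not anything from Turing’s paper or from any real chatbot’s code; the question format, the scripted answers and the delay formula are all invented for the example:

```python
import random
import re


def machine_reply(question: str) -> tuple[str, float]:
    """Return (answer, seconds to wait before sending it)."""
    match = re.fullmatch(r"what is (\d+) \* (\d+)\?", question.lower())
    if match:
        a, b = int(match.group(1)), int(match.group(2))
        answer = str(a * b)  # computed in microseconds...
        # ...but released only after a human-plausible "thinking" pause,
        # roughly proportional to how hard the problem looks
        delay = 2.0 + 0.5 * (len(str(a)) + len(str(b))) + random.uniform(0, 3)
        return answer, delay
    # canned personal answers, prepared in advance by the developers
    scripted = {
        "where did you grow up?": "In Odessa, with my grandmother.",
        "do you have a pet?": "A guinea pig named Bill.",
    }
    return scripted.get(question.lower(), "Hmm, why do you ask?"), random.uniform(1, 4)


answer, delay = machine_reply("What is 17 * 24?")
print(answer, round(delay, 1))  # correct answer, but not suspiciously fast
```

The point of the sketch is only that the deception is cheap: the hard part for the machine is not the arithmetic, it is looking appropriately slow at it.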

In his paper Alan Turing supposed that in 50 years’ time (that is, by 2000) “it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning”. In other words, the machine passes if it fools the judges at least 30% of the time. This assumption became reality 12 years later than predicted. At the first “Turing Test Marathon” event, the judges were fooled in 20.2% of cases by a program called “Eugene Goostman”, which posed as a 13-year-old boy from Ukraine; two years later it improved that result to a 33% deception rate, just clearing Turing’s bar. In 2015 a new child-like AI program named “Sonya Guseva” (this time fully in Russian) fooled the judges in 47% of cases.

If you want to check Eugene out, here’s the website: http://eugenegoostman.elasticbeanstalk.com/

And here comes a curious detail. A few months ago I was reading a book by Dr. Andrey Kurpatov, a famous psychotherapist and author, and noticed a very interesting experiment described there. Imagine a monkey sitting in a cage (I’ll refer to her as “she”). There are two boxes in front of her, and she saw the experimenters put a banana into one of them. Then another experimenter entered the room and opened the box the monkey pointed at. If there was a banana inside, the experimenter gave it to the monkey as a reward. If the box was empty, the monkey was left without the fruit. Over time the monkey realized that in order to get the treat she needed to point at the right box, and she quickly learned to always pick the “banana box”.

Then the experiment got more complicated. One of the experimenters was a “good” one: if the monkey chose the “banana box”, the “good” experimenter gave her the fruit. But the “bad” experimenter acted differently: once the monkey pointed at the “banana box”, the “bad guy” took the banana and ate it himself in front of the shocked monkey. So the poor animal learned the hard way who was good and who was bad. From then on, when the “good guy” entered the room she pointed at the “banana box”, because she knew that this way she would get the reward. But when the “bad guy” entered, the furry trickster acted as if she had no idea what she was being asked, just so as not to give away the location of the “banana box”. Moreover, with time the monkey turned to the Dark Side and became truly devious: when the “bad guy” entered the room, she intentionally pointed at the empty box, so that no one would get anything.

The point of this example is that the monkey may not have always understood what was happening and why those people were taking her bananas, but she quickly adjusted and learned how to trick her assessors for her own benefit. Likewise, in the imitation game the machines try to trick their judges into believing they are human. Lying and misleading sound like very human behaviour, but does excellence in trickery mean intelligence? That is one Pandora’s box we are not going to open (and won’t let our monkey open either).

The Oxford Dictionary defines intelligence as “the ability to learn, understand and think in a logical way about things”. Don’t machines already do all those things? The ability to absorb information at a beyond-human capacity allows machines to act “smart” and “intelligent”. Even while disconnected from the Internet, IBM’s Watson won the Jeopardy! quiz show based on the data it already had. So it did indeed do everything in the definition: it learned the data, understood the questions, thought in a logical (read: “programmed search engine”) way and won the show.

Full story can be found here: https://www.jeopardy.com/jbuzz/news-events/watson-ibm-invitational

That example may have misled you into thinking that data processing alone is the key to intelligence. Access to the world’s archives and to various datasets, fed to algorithms in order to tackle a specific problem that humans cannot solve on their own, still does not cover the notion. Look at Deep Thought, the supercomputer from “The Hitchhiker’s Guide to the Galaxy”, which took seven and a half million years of calculation and analysis to find the Answer to the Ultimate Question of Life, the Universe, and Everything, and still only got as far as “42”. Even as a straight-from-fiction example, it gives you the idea that “data storage” is not equal to “intelligence”.

AI-based algorithms can do pretty much everything you can imagine humans doing: writing articles, teaching and grading tests, painting pictures, composing music, running deliveries, performing surgical manipulations, working in labs and hazardous production facilities, you name it. Given a goal, an algorithm proceeds to complete it, and “it can’t be bargained with, it can’t be reasoned with” (got the reference?). According to the dictionary, the algorithms have already met all the criteria, and yet we are still not conquered by the Almighty Skynet and the like. What is the trick?

Screenshot from “Doctor Who — Doomsday” episode

The key to completing the equation “AI = human” is the human element itself. The emotional side can be constructed as an algorithmic sequence of reactions to various stimuli, enabling the machine to act human-ishly. But would that mean the machine IS human? Would a built-in Tears Generator, activated by a sad moment in a movie, mean being human? How about family bonds, affection for pets or nostalgia for times past? Is it possible to program those kinds of components? Remember what happened to the Cybermen from “Doctor Who” when they regained emotions: their superior technology burnt out, overwhelmed.

Alan Turing hoped that machines would eventually compete with humans in all purely intellectual fields, and that it was up to us to think carefully about which ones to develop first. I do believe that in terms of creation (of any kind) humans are already way behind and may keep losing to the advancing intelligence of machines. But the power of humans lies in their ability to be simple and emotional: not the best weapon in the Digital Era, but it will do for the foreseeable future.


Kseniya Kalashnikova

5G SW Engineer, previously Senior BI Business Analyst; Host of “Llamazing Data” podcast, living in Paris and trying to survive :)