What’s Eugene Goostman Thinking Inside the Black Box?

Charge · Published in Charge VC · Jun 30, 2020 · 6 min read

Welcome to Charge’s Summer of Synthetic Media.

In his famous 1950 paper, Computing Machinery and Intelligence, British mathematician Alan Turing proposed a straightforward, if revolutionary, standard for artificial intelligence (AI). At the time, Turing himself noted that the idea that machines might be capable of thinking could “possibly [be]…heretical,” citing St. Thomas Aquinas’ argument in Summa Theologica (as quoted by Bertrand Russell) that “thinking is a function of man’s immortal soul.” To move beyond this fraught question, Turing asked whether a threshold of human imitation, assessed through a blind game, would instead be a more widely accepted standard of intelligence. In other words, “‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers which would do well in the imitation game?’”

This ‘imitation game’ as proposed, and ever after known as the ‘Turing Test’, was simple: a human interrogator would question both a human being and a computer under conditions where the interrogator would not know which was which. The communication would be conducted entirely in text, by digital means. Turing argued that if the interrogator could not distinguish between the human and the computer with more than 70 percent accuracy after five minutes of questioning, then it would be reasonable to infer that the computer was intelligent. He also made a prediction: “that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well” that they would be able to pass the Turing Test.
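Turing’s criterion is concrete enough to write down. As a minimal sketch (the session framing and the function name are our paraphrase of the paper, not any official formalization):

```python
# A minimal sketch of Turing's pass criterion: a machine "passes" if
# interrogators identify it correctly no more than 70 percent of the
# time across their five-minute sessions.
def passes_imitation_game(correct_identifications: int,
                          total_sessions: int,
                          threshold: float = 0.70) -> bool:
    """True if interrogator accuracy is at or below the threshold."""
    return correct_identifications / total_sessions <= threshold

# Example: judges correctly identify the machine in 67 of 100 sessions.
print(passes_imitation_game(67, 100))  # True: 0.67 <= 0.70
print(passes_imitation_game(80, 100))  # False: 0.80 > 0.70
```

Note that the bar is deliberately low: the machine doesn’t need to fool everyone, only to keep the average interrogator’s accuracy at or below the threshold.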

You can sense Turing’s disappointment that no computer of his time could pass his test, given that he wouldn’t be alive when it would likely happen; but how did he arrive at that prediction? The bottleneck wasn’t storage capacity. Turing refuted that possibility with a top-down approximation based on his estimate of the binary storage capacity of the neurons in the human brain, which he put at 10^10–10^15 (though he noted that he would be surprised “if more than 10^9 was required for satisfactory playing of the imitation game”). Such a memory, he wrote, “would be a very practicable possibility even by present techniques,” even though the most advanced machine seemingly available to him at the time, which he refers to as the Manchester machine, had a storage capacity of only 10^7. So if it wasn’t storage capacity, where was the bottleneck? As it turns out, the limitation Turing recognized as the hard ceiling on machine development hasn’t changed since: the ability to program at scale. Turing, no slouch himself, claimed to be able to program at a rate of “a thousand digits” per day. This rate is how he arrived at his “end of the century” prediction. Turing figured that “about sixty workers, working steadily through the fifty years might accomplish the job”; that is, “if nothing went into the waste-paper basket.” His acknowledgment that “Some more expeditious method seems desirable” has to rank among the greatest understatements of the 20th century. Indeed, one wonders what Alan Turing would think if he were to happen upon a Turing Test in progress today.
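Turing’s figure checks out as back-of-envelope arithmetic. A rough sanity check, assuming about 300 working days per year (a number Turing doesn’t state explicitly):

```python
# Back-of-envelope check of Turing's "sixty workers for fifty years" estimate.
digits_per_day = 1_000        # Turing's claimed personal programming rate
workers = 60
years = 50
working_days_per_year = 300   # assumed; Turing doesn't give this number

total_digits = digits_per_day * workers * years * working_days_per_year
print(f"{total_digits:.1e}")  # 9.0e+08, on the order of the 10^9 target
```

Sixty programmers working at Turing’s own pace for fifty years land almost exactly on his 10^9-digit storage estimate, which is presumably no coincidence.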

Would he be pleasantly surprised that his prediction had come true (albeit 14 years late), and intrigued that it had been a chatbot masquerading as a 13-year-old Ukrainian boy named Eugene Goostman that had first passed the test in 2014 (a triumph not without controversy)? Would he be puzzled that it had taken so long? Or would he be flabbergasted to discover that in some machine-to-machine interactions there are no longer humans involved in the conversation at all; that we’ve reached a stage where machines not only evaluate but train each other entirely outside our view, inside the ‘black box’, to complete our tasks in ways that we can’t comprehend? Toward the end of Computing Machinery and Intelligence, Turing remarked that “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour.” This casual observation makes sense to us today but was stunningly ahead of its time, as it came a full two years before the first program that could do any kind of learning was developed (Arthur Samuel’s checkers program) and more than forty years before the first independent learning machines were built (IBM’s Deep Blue). It’s impossible to know, but based on this I think we can safely assume that Alan Turing would be delighted by the prescience of his predictions and by recent developments that have accelerated access to programming at scale (we think he’d definitely be following the No-Code/RPA movement, as we are at Charge!). But like most leading machine intelligence researchers, VCs, entrepreneurs, and members of the general public alike, he would also be curious about the next, and probably more pressing, question: if the machine imitating Eugene Goostman was intelligent enough to pass the imitation game, what are he and his friends up to inside the black box?

My name is Justin Clapper; I’m a Columbia MBA student, former Navy SEAL, and the 2020 Summer MBA Associate at Charge Ventures, an NYC-based pre-seed venture fund. If machines can pass for humans and have been able to do so for years, is it possible that we have lost our way in what T. S. Eliot referred to as the “wilderness of mirrors”? How can you tell whether I wrote this, and not Eugene the bot? If you comment, how do I know it’s you? Are “you” human or machine? Does it still matter? Perhaps more interestingly, if the distinction between human and machine ever mattered, what happens when it stops mattering, and what does that mean for the future?

This summer, Charge is looking at how humans have used creativity and increasingly advanced technologies to synthetically create and manipulate media, since the beginning of media itself. This series of posts will follow the history of these tools and their use: to obscure the truth in war, to deceive in peacetime, to make money, and to take it from others. We will chart the development of synthetic media through a future where machines paint original works, not only write songs but sing them, and build new worlds on an industrial level.

So what is synthetic media, and why does it matter? Synthetic media is all content, including artificially generated video, voice, images, or text, for which machines or artificial intelligence (AI) takes over part, or all, of the content creation process. Synthetic media is platform agnostic and spans emerging technologies, such as augmented reality (AR) and virtual reality (VR), as well as legacy technologies, such as chemistry-based photography, radio, and broadcast television. Like all frontier technologies, synthetic media has created and will continue to create new opportunities and means of expression, while simultaneously shuttering old industries and sowing division. Unlike other frontier technologies, however, synthetic media matters not just because it has the potential to create or destroy significant economic activity, or to realign the social order, but because it ultimately has the power to shift how we interact with and understand the very nature of reality itself.

If the future is synthetic, at times, so was the past. In our next post, we’ll look at the origins of synthetic media. Stay tuned!

If you are working in or thinking about the Synthetic Media space, we’d love to connect! Get in touch here: team@charge.vc. See our last report on No-Code here.

Bibliography:

Turing, A. M. “Computing Machinery and Intelligence.” Mind. Oxford University Press, October 1, 1950. https://academic.oup.com/mind/article/LIX/236/433/986238.

“Computer AI Passes Turing Test in ‘World First’.” BBC News. BBC, June 9, 2014. https://www.bbc.com/news/technology-27762088.

Sample, Ian, and Alex Hern. “Scientists Dispute Whether Computer ‘Eugene Goostman’ Passed Turing Test.” The Guardian. Guardian News and Media, June 9, 2014. https://www.theguardian.com/technology/2014/jun/09/scientists-disagree-over-whether-turing-test-has-been-passed.

Marr, Bernard. “A Short History of Machine Learning — Every Manager Should Read.” Forbes. Forbes Magazine, March 8, 2016. https://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/.

Eliot, T. S. “Gerontion by T. S. Eliot.” Poetry Foundation. Poetry Foundation. Accessed June 25, 2020. https://www.poetryfoundation.org/poems/47254/gerontion.
