The Man Machine

Weapons of Reason
Feb 28, 2019 · 8 min read

By the middle of the 20th century, AI had become a subject of serious research. Although computing power still limited its potential, the field slowly evolved from theory to reality.

Words Rockwell Anyoha
Illustration Rick Berkelmans

In the first half of the 20th century, science fiction familiarised the world with the concept of artificially intelligent robots. It began with the humanoid that impersonated Maria in Fritz Lang’s Metropolis and continued with the heartless Tin Man from The Wizard of Oz. By the 1950s, there existed a generation of scientists, mathematicians, and philosophers for whom the concept of artificial intelligence (AI) was already culturally assimilated.

Man’s relationship with machines goes back further still, having formed an essential part of western society since the industrial revolution. Even in the early days, machines were doing impressive and complicated work with relatively little intellectual input from their human overlords. A pivotal example was the invention of the semi-automated loom by Basile Bouchon in 1725. Bouchon sought to automate the execution of complex woven patterns by encoding the data of their design on long stretches of punched tape. As the tape moved through his looms, threads would catch onto the punched holes, creating an elegant dance that slowly wove together the structure of the textile. This interaction between pre-programmed, logical operations and machines was the precursor to the technological conditions that gave rise to “smart” machines. Bouchon’s revolution reduced the loom’s human operator from a skilled worker to a mechanical observer almost overnight.

By the time the 1940s arrived, some of the earliest computers were in use by the British in the Second World War, assisting in the code-breaking efforts against the Wehrmacht. These primitive models executed commands based on a set of logical instructions outlined by the very same method used to program Bouchon’s looms. These computers still lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. They could be told what to do but couldn’t remember what they had done. Two mathematicians, Alan Turing and John von Neumann, envisioned an architecture in which a computer’s instructions would be stored in its memory alongside the data they operated on, and executed from there.

The first “stored program” computer, the Manchester Mark 1 (MM1), was developed in 1948 by Frederic Williams, Tom Kilburn, and Geoff Tootill at the Victoria University of Manchester. Capable of solving complex problems in mathematics and physics, the MM1 was built from electronic rather than mechanical parts and stored its information in space-efficient cathode ray tubes rather than on tape. Technological advancements such as this would be essential in realising the intellectual potential of computers.

In 1950, Turing published Computing Machinery and Intelligence, a seminal paper in which he discussed how to build intelligent machines and how to test their intelligence. Turing argued that if thinking was simply the use of available information to form logical arguments and reach a conclusion, then there was no reason a computer could not think. To think like a human would simply require technological advancements that allowed machines to store and process more information.

Such advancements were dependent on two factors: intellectual interest and financial support. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month, meaning only prestigious universities and big technology companies could afford to sail in these uncharted waters. A proof of concept, as well as advocacy from high profile figures, was essential to persuade research funders that machine intelligence was worth serious pursuit.

In 1955, Allen Newell, Cliff Shaw and Herbert Simon wrote Logic Theorist, a program designed to mimic the problem-solving skills of a human. It is considered by many to be the first artificial intelligence program, and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956. This historic conference, hosted by Dartmouth’s John McCarthy, was intended to light the touchpaper for the field; McCarthy brought together top researchers for an open-ended discussion of artificial intelligence, a term he had coined in his proposal for the event.

Here, artificial intelligence was broadly defined as the ability of a machine to simulate human intelligence. Like Turing, McCarthy and many of his colleagues had high expectations for the potential of AI. They imagined computers excelling at natural language processing, abstract thinking, and self-recognition. In 1970, MIT’s Marvin Minsky, one of the conference’s organisers, told Life Magazine, “in three to eight years we will have a machine with the general intelligence of an average human being.” But while the basic proof of concept was there, even today many of Minsky’s goals have yet to be achieved. In spite of this, the significance of DSRPAI cannot be overstated: it catalysed the next 20 years of AI research.

“Software complex enough to mimic the human brain could well be impossible as we barely understand how the human brain works.”

Optimism was high and expectations even higher, but piercing the initial rose-tinted fog of AI revealed a mountain of obstacles, the biggest of which was the lack of computational power to make substantial progress: computers simply couldn’t store enough information or process it at useful speeds. In order to communicate, for example, one needs to know the meanings of a vast number of words and understand them in multiple combinations. That takes a lot of data that contemporary computers simply couldn’t store. Hans Moravec, a doctoral student of McCarthy at the time, said that “computers are still millions of times too weak to exhibit intelligence.” As patience dwindled so did the funding, and research slowed almost to a halt.

In 1971, Intel released the first commercially available microprocessor, the 4004, which squeezed an entire central processing unit (CPU) onto a single chip. Microprocessors could process complex information at higher speeds and were much smaller and more manageable than the previous generation of computers, which spread their circuitry across many separate integrated circuits rather than a single chip. As a result, computers were able to store more data and to execute commands and make decisions faster.

In the 1980s, interest in AI was reignited by two developments: an expansion of the algorithmic toolkit, and a long-awaited boost of funds. John Hopfield and David Rumelhart popularised “deep learning” techniques which allowed computers to learn using their own experience. Simultaneously, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program. Subsequently, expert systems were widely used in all manner of industries.
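
The rule-based approach behind expert systems is straightforward to sketch: knowledge elicited from a human expert is captured as if-then rules, and an inference engine applies those rules to the facts of a particular case. The toy Python example below is only an illustration of the idea; the rules, fact names and advise function are invented here, not taken from any real 1980s system.

```python
# Toy rule-based "expert system": purely illustrative, not a historical program.
# Each rule pairs a set of required facts with the advice an expert would give.
RULES = [
    ({"engine_cranks": False, "battery_ok": False}, "replace the battery"),
    ({"engine_cranks": True, "fuel_ok": False}, "refill the fuel tank"),
    ({"engine_cranks": True, "fuel_ok": True, "spark_ok": False}, "check the ignition system"),
]

def advise(facts: dict) -> list[str]:
    """Return the advice of every rule whose conditions all match the given facts."""
    return [
        conclusion
        for conditions, conclusion in RULES
        if all(facts.get(key) == value for key, value in conditions.items())
    ]

# A non-expert describes their situation; the encoded expertise does the rest.
print(advise({"engine_cranks": True, "fuel_ok": True, "spark_ok": False}))
# -> ['check the ignition system']
```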

The Japanese government also heavily funded expert systems and other AI-related endeavours as part of their Fifth Generation Computer Project (FGCP). From 1982 to 1990, they invested 400 million dollars into research with the intention of revolutionising computer processing, implementing logic programming, and improving artificial intelligence. Surprisingly, there was little correlation between funding and research progress. The more likely financial drivers of progress were the reduced cost of computers in general, and the appropriation of the field by companies like IBM and, later, Google.

In 1977, personal computers became affordable for the average household with the releases of the Apple II, the TRS-80 Model 1, and the Commodore PET. Tech companies took advantage of these reduced costs to frequently demonstrate the capabilities of artificial intelligence through publicity events. Perhaps the most well known of these occurred in 1997 when reigning world chess champion Garry Kasparov was defeated by IBM’s Deep Blue. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. In 2000, Cynthia Breazeal of MIT developed Kismet, a robot that could recognise and display emotions, and throughout the 2000s, driverless cars were exhibited at the DARPA Grand Challenges.

The last major breakthrough in the AI explosion was the invention of the graphics processing unit (GPU), hardware originally designed for rendering images to display on screens. GPUs were widespread in video gaming from 1999 onwards, but it wasn’t until around 2007, when tools for general-purpose GPU programming arrived, that they became usable for general computation and not just graphics. The difference between CPUs and GPUs is that a CPU uses a few powerful cores to execute code, while a GPU uses hundreds of simpler, specialised cores. In other words, if you have a task that is very large but made up of simple, repetitive operations, the GPU can handle it by breaking it down into hundreds of smaller tasks that run in parallel. With a GPU, neural networks can engage in deep learning at incredible speed.
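
To make that parallelism concrete: the heavy lifting in a neural network is matrix multiplication, and every element of the result can be computed independently of all the others, which is exactly the kind of very large but very simple work that a GPU’s many cores can share. The Python sketch below uses NumPy on an ordinary CPU purely as an illustration; it computes each output element separately to show how naturally the work splits into independent pieces that a GPU would run simultaneously.

```python
import numpy as np

# A layer's forward pass is a matrix multiply: outputs = inputs @ weights.
# Each output element depends only on one row of inputs and one column of
# weights, so all of them could be computed at once on separate cores.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((64, 128))    # 64 examples, 128 features
weights = rng.standard_normal((128, 32))   # a layer with 32 output units

outputs = np.empty((64, 32))
for i in range(64):            # on a GPU, these 64 * 32 = 2048 independent
    for j in range(32):        # dot products would be spread across many cores
        outputs[i, j] = inputs[i, :] @ weights[:, j]

# The vectorised call gives the same answer in one step; GPU libraries expose
# the same operation and fan it out across hundreds of cores in hardware.
assert np.allclose(outputs, inputs @ weights)
```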

What holds AI back now is neither funding nor hardware, but a wait for the arrival of software complex enough to mimic the human brain. Such software could well be impossible as we barely understand how the human brain works, and a neural network is only vaguely analogous to the functioning of biological neurons. Nevertheless, intelligent machines now play an essential role in our everyday lives in a way that Turing, McCarthy and Minsky could only have dreamed of. Through these machines, society consumes and produces information at an enormous rate. Yet the fact that this power already lies at our fingertips is rarely acknowledged. Perhaps this explains our unease at the thought of an approaching era of artificial intelligence. In many ways, this era has already arrived.

This original article appears in Weapons of Reason’s sixth issue: AI.

Weapons of Reason is a publishing project by Human After All, a design agency in London, created to understand and articulate the global challenges shaping our world.
