Alan M. Turing: Decoding A Legend

Apeksha Srivastava
12 min read · Jun 14, 2020


“Alan Turing was the first to make a careful analysis of the potential capabilities of machines.” — Michael J. Beeson, renowned mathematician

Image Source: CityAM

A renowned computer scientist, mathematician, logician, philosopher, cryptanalyst, and theoretical biologist, Alan Mathison Turing was a major contributor to the development of theoretical computer science. Widely regarded as the father of theoretical computer science and artificial intelligence, he formalized the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. The prestigious Turing Award (effectively, the Nobel Prize of computing) is named after him.

This article is a small attempt to trace a few contributions of this legend: how he laid the groundwork for computing while trying to answer some fundamental questions about the very foundations of mathematics and logical reasoning, his thoughts on intelligent machines, and how to characterize them.

Image Source: AwesomeStories

His Brief Life-Story

Born in 1912 in London, Turing was appointed an Officer of the Order of the British Empire (OBE) and elected a Fellow of the Royal Society (FRS) for his unmatched work and service in several fields. He began his education at Sherborne School, then studied at Cambridge, and later joined the Ph.D. program at Princeton. He worked with British intelligence (the Government Code and Cypher School) for several years. After World War II, Turing moved to the National Physical Laboratory and then to the University of Manchester. He died in 1954, at the age of 41. He was a genius who shaped the domains of mathematics as well as computing for the better!

Stepping Back in Time — Understanding the Context

Calculus Ratiocinator (Image Source: SpringerLink)

Mathematics represents the crystallization of our reasoning capabilities, and this demands that we set it on a rigorous foundation. How do we do that? We start with a handful of simple, intuitive axioms: statements that are assumed to be true without proof. On the basis of these axioms, the entire mathematical edifice is developed; this is the concept of proof in mathematics. One of the most successful efforts in this direction came around 300 BC from Euclid, who formalized geometry as it was known in his time, starting from five postulates (axioms). Similar efforts were undertaken sporadically throughout history. Gottfried Wilhelm Leibniz was one of the first to take this process seriously for logic. His vision was that any argument should be representable through calculation: let us calculate and see who is right! He called this system the calculus ratiocinator, a logical calculation framework capable of reasoning. Leibniz’s work was followed up by George Boole, the founder of Boolean algebra, which is a foundation of the information age.

One of the most important characters in the history of mathematics was David Hilbert, who made great strides in this field during the late 19th and early 20th centuries. He set out goals for what mathematics should pursue over the next 100 years and emphasized that there should be no place for intuition in mathematical proofs. In this new mathematics, nothing should be obvious and everything should follow from rigorous proof. Why do we need this? Because mathematics is the basis of all human logical processes; therefore, we need to start from axioms that are logically consistent, and everything must be derived from them. Hilbert meant that if we can state a particular theorem and it happens to be true, we must be able to prove it. Many people were inspired by these statements. The renowned logician Gottlob Frege set out to develop the foundations of mathematics starting from set theory. However, Bertrand Russell (later a Nobel laureate), who was young at the time, questioned his definition of a set. According to Frege, a set is any collection of objects, but Russell posed a paradox: consider the set of all sets that do not contain themselves; does it contain itself? Frege acknowledged this in his book, stating that Russell’s observation had collapsed the entire framework he had developed.
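In modern notation (a standard rendering of Russell’s paradox, not Frege’s original symbolism), the paradox fits in one line:

```latex
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R
```

If R contains itself, then by its own definition it must not, and vice versa; this contradiction is what undermined Frege’s system.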

So Frege’s formalization failed, but Hilbert’s question still remained: can we come up with an axiomatic system in which all true theorems are provable? Bertrand Russell and Alfred North Whitehead took it up and started a massive project called Principia Mathematica. The objective was to axiomatize all of logic and set theory. Still, were we any closer to Hilbert’s goal? Ten years passed, and the two mathematicians grew frustrated as they encountered paradoxes within paradoxes and were unable to establish the consistency of their system. Eventually, Kurt Gödel, then a young graduate student, gave a talk in 1930 outlining what are now known as the incompleteness theorems. These were a kind of death knell for a large part of the foundational quest in mathematics (over 50 years of effort). One of the most crucial ideas Gödel introduced was the encoding of mathematical statements as numbers; in some sense, all computers today are built on this idea!
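As a toy illustration of this encoding idea (the symbol codes and the prime-power scheme below are a simplification of mine, not Gödel’s actual numbering), here is how a formula can be packed into a single integer:

```python
# A toy version of Goedel numbering: encode a string of symbols as one integer
# by raising the i-th prime to the code of the i-th symbol.

SYMBOL_CODE = {"0": 1, "=": 2, "S": 3, "(": 4, ")": 5, "+": 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_number(formula):
    n = 1
    for prime, symbol in zip(PRIMES, formula):
        n *= prime ** SYMBOL_CODE[symbol]
    return n

print(godel_number("0=0"))  # 2**1 * 3**2 * 5**1 = 90
```

Because factorization into primes is unique, the original formula can always be recovered from the number, so statements about formulas become statements about numbers.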

Another critical question remained: given an axiomatic system and a statement, is there an automated way of deciding whether the statement is provable? In other words, can all of provable mathematics be mechanized? One reason this had not been tackled effectively was that it is a little vague: how does one formalize an “automated way” in mathematics? This is where Alan Turing enters the picture.

The Universal Method

At the age of 23, Alan Turing tackled the above question, framing it in terms of a universal prover: one method that could settle all mathematical statements. In 1936, he authored a paper titled “On Computable Numbers, with an Application to the Entscheidungsproblem.” It is widely regarded as the starting point of computer science.

Turing took a couple of steps in this research. First, he used Gödel’s idea of encoding mathematical statements as numbers and observed that anything provable in an axiomatic system is something computable: a number is computable if there is a finite set of rules for writing down its digits. Turing then introduced a very simple notion of computing (to him, proving was the same as computing). A Turing machine is a simple device with a finite set of states and an infinite tape that it can read and write: at every step, it reads the symbol under its head, possibly changes its state, writes a symbol, and moves along the tape. This is all that is needed to compute any computable number and to prove any statement from a fixed set of mathematical axioms! Next, he observed that the description of such a machine is itself a valid input, and on this basis he described the universal Turing machine. What does it do? It takes as input the description of another Turing machine and simulates that machine; in simple terms, it is the universal prover. Turing also showed that there exist statements that are not decidable, meaning that no machine can settle them.
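The following is a minimal sketch of such a machine in Python (the rule format and the bit-flipping example are mine, purely for illustration). Note that because the simulator takes the machine’s rule table as data, it is already behaving like a tiny universal machine:

```python
# A minimal Turing machine simulator: a finite rule table plus an unbounded tape.
# The example machine flips every bit of its input and then halts.

def run_turing_machine(rules, tape, state="start", blank="_"):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol           # write a symbol
        head += 1 if move == "R" else -1  # move the head
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells)

# Rule table: (state, symbol read) -> (next state, symbol written, head move)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip_bits, "10110"))  # -> 01001_
```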

This universal Turing machine is a direct precursor to the notion of software and the stored-program computer! Since Turing was deeply interested in engineering, the language of the paper is strongly engineering-oriented. Alonzo Church made a similar claim around the same time using a different formalism, the lambda calculus, which underlies recursive function theory. Turing’s and Church’s formulations were later shown to be equivalent, and Church became Turing’s advisor during his Ph.D. at Princeton. The Church-Turing thesis states that, despite the simplicity of its construction, the Turing machine can encode any possible computation. This equivalence has been established for a number of other systems, such as cellular automata, Conway’s Game of Life, Post’s systems, and so on. According to the Church-Turing-Deutsch principle, our universe is essentially a Turing machine, and hence anything corresponding to a non-computable (non-recursive) function is not physically possible. This brings us to a critical question: are humans also Turing machines? It remains an open topic in several branches of philosophy.

The Codebreaker

Image Source: AccuWeather

World War II started in 1939, and the most important strategy employed by the Germans was the blitzkrieg (lightning war, based on rapid, concentrated attacks). It was powered by industrialization, and one of the technologies behind it was the Enigma, a machine that allowed coded messages to be communicated over radio waves. What is an Enigma machine? At a very high level, it implements a substitution cipher (it replaces one letter with another). The machine is complicated because the substitution rule keeps changing. How? Looking at a physical Enigma, one can see three rotors. Every time a key is pressed, the rotors step; once they do, the underlying electrical circuitry changes, and so the substitution rule changes. In effect, a different substitution rule is used for every letter of a message, which is why the machine is extremely hard to break. However, it is not enough to have a strong cryptosystem; how one uses it matters just as much.
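A heavily simplified, single-rotor sketch can show the stepping-substitution idea (the real Enigma had multiple rotors, a reflector, and a plugboard; the code below is only an illustration, not a model of the actual machine):

```python
# A toy one-rotor cipher: the substitution rule shifts after every key press,
# so the same plaintext letter maps to different ciphertext letters.

import string

ALPHABET = string.ascii_uppercase
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"   # a fixed scrambled wiring

def encode(message, offset=0):
    out = []
    for ch in message:
        if ch in ALPHABET:
            i = (ALPHABET.index(ch) + offset) % 26   # rotor position shifts the wiring
            out.append(ROTOR[i])
            offset += 1                              # the rotor steps on every key press
        else:
            out.append(ch)
    return "".join(out)

print(encode("AAAA"))  # 'EKMF' -- the same letter encrypts differently each time
```

Pressing “A” four times produces four different ciphertext letters, which is exactly what defeats simple frequency analysis.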

Bombe — The Codebreaker (Image Source: Britannica)

A team of Polish cryptographers used guesswork and clever mathematics about permutations to break Enigma’s code. But the Germans kept improving the machine at regular intervals, and the Poles eventually realized that they could not keep up on their own. What followed was a meeting with British intelligence in 1939, during which they handed over all their findings on Enigma. Around the same time, Turing joined the GC&CS and designed the Bombe, a very fast electromechanical machine to aid the cryptanalysis of Enigma, working alongside other cryptanalysts such as John Tiltman and Bill Tutte. From 1939 to 1941, several sophisticated codes were broken. Two important ones were the North Atlantic naval Enigma (breaking it helped prevent the sinking of Britain’s cargo ships by U-boats) and Tunny (the cipher used for Hitler’s high-level messages). Long story short, more than 10,000 people (mathematicians, cryptographers, linguists, and experts in other fields) helped in one way or another to break Enigma’s code.

The Birth of Baby

After several interactions with engineers and other experts, Turing realized over time that the universal machine would have to be electronic. There were computing machines at the time as well, but they were all very specialized. To get a better sense of this, imagine stepping into a kitchen: there is equipment to chop, another to grate, another to cook, and so on, but no single gadget can do everything. Along similar lines, there was no notion of a general-purpose computer. Turing and the famous mathematician John von Neumann interacted several times to discuss what such a computer would look like.

Image Source: I Programmer

Around 1946, Turing worked on the design of a machine called Baby, one of the first instances of a Reduced Instruction Set Computer (RISC), with only about 15 possible instructions. This computer was used to run a number of interesting programs. Along with Christopher Strachey, a British computer scientist, Turing wrote programs for Baby to play draughts (a board game) and to write love letters! In other words, it did very basic natural language processing and, as Turing described it, used simple recipes to create sentences from a thesaurus.
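A minimal sketch of that “simple recipes” idea (the template and word lists below are mine, not the original program’s):

```python
# Fill a fixed sentence template with words drawn at random from small word
# lists -- the basic recipe behind the early love-letter generator.

import random

ADJECTIVES = ["darling", "sweet", "tender", "beautiful"]
NOUNS = ["affection", "longing", "devotion", "heart"]

def love_letter():
    return (f"My {random.choice(ADJECTIVES)} {random.choice(NOUNS)}, "
            f"you are my {random.choice(ADJECTIVES)} {random.choice(NOUNS)}.")

print(love_letter())
```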

In 1945, the famous EDVAC report came out, and it essentially characterized the modern notion of a computer, now called the von Neumann architecture. Turing developed his own proposal in 1946 at the National Physical Laboratory, known as the Automatic Computing Engine (ACE) proposal. In it, he synthesized a number of ideas, not all of which were new at that point. However, three stood out: floating-point arithmetic, a hardware bootstrap loader (when a computer is switched on today, it starts itself by executing a small piece of code at a fixed location in memory and proceeds from there), and the subroutine stack.
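To see why the subroutine stack matters, here is a tiny toy interpreter (not Turing’s ACE design; the instruction names are invented for illustration) in which CALL pushes a return address and RET pops it, so a subroutine can be reused and nested:

```python
# A toy machine with a subroutine (return-address) stack.

def run(program):
    pc, stack, output = 0, [], []
    while pc < len(program):
        op, arg = program[pc]
        if op == "CALL":          # jump to the subroutine, remember where to come back
            stack.append(pc + 1)
            pc = arg
        elif op == "RET":         # return to the instruction after the CALL
            pc = stack.pop()
        elif op == "PRINT":
            output.append(arg)
            pc += 1
        elif op == "HALT":
            break
    return output

# The main routine calls the subroutine at address 3 twice, then halts.
program = [
    ("CALL", 3), ("CALL", 3), ("HALT", None),
    ("PRINT", "hello from the subroutine"), ("RET", None),
]
print(run(program))  # the greeting appears twice
```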

Machine Intelligence — Today’s Buzzword!

Alan Turing’s long-term interest was to find out how the mind works and whether machines can be intelligent; this is one of the hottest research topics of the present time. He wanted to formalize and characterize machine intelligence. In fact, Claude Shannon, the father of information theory, also built a chess-playing program like Turing’s, and the two programs were played against each other.

Moreover, there is a typewritten note by Turing in which he asks a series of very interesting questions. Can one make a machine that will obey the rules of chess (playing random legal moves)? Can one make a machine that would solve chess problems? He then moved on to more complicated questions: can one make a machine that would answer questions put to it, such that it would not be possible to distinguish its answers from those of a man? Can one make a machine that would have feelings like a human? In response, he said he believed the chess-playing problems could be solved. For the question about answering like a human, he said that at least he had no way of showing it could not be solved. The last question, he noted, was trickier: how do we know that two people feel the same way about the same situation, and if we cannot, how can we say whether a machine feels the same way?
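The first of these questions, a machine that merely obeys the rules of chess by playing random legal moves, is trivial today. A small sketch, assuming the third-party python-chess package (install with pip install chess), which handles all the game logic:

```python
# Play a full game of chess by choosing uniformly among the legal moves.

import random
import chess

board = chess.Board()
while not board.is_game_over():
    move = random.choice(list(board.legal_moves))  # any legal move will do
    board.push(move)

print(board.result())  # e.g. '1/2-1/2' -- random play usually ends in a draw
```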

Image Source: Amazon

He also wrote what is often called the first manifesto of artificial intelligence, the 1948 report Intelligent Machinery. It contains many precise statements on the idea of machine intelligence. Turing also delivered a talk in 1947 that offers a glimpse into this new field, predicting the advent of machines that can act intelligently, learn from experience, beat average human opponents at games, and so forth.

The Imitation Game is another of Turing’s great contributions. The original parlour game existed in the Victorian era; based on it, he proposed what we now call the Turing test. Despite much controversy, it remains a guiding goal of AI to this day.

The Interesting Tale of Morphogenesis

Turing worked on this topic as a theoretical biologist for the last few years of his life. He asked a fundamental question: when an embryo is forming, there are only a few cells, all of which are similar to one another, so why do these cells later differentiate? Is there a mathematical theory that can capture this process? He eventually claimed to have found such a theory: systems of mutually reacting and diffusing chemicals in which instabilities of the uniform state inevitably drive this differentiation into patterns.
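A minimal one-dimensional sketch in the spirit of this idea (the Gray-Scott reaction terms and parameter values below are a common modern choice, not Turing’s original 1952 equations): two chemicals diffuse at different rates and react, and a nearly uniform starting state breaks up into a stable pattern of peaks.

```python
# A 1-D reaction-diffusion toy model (Gray-Scott): a small local disturbance
# grows into a repeating spatial pattern -- a "Turing pattern".

import numpy as np

n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # diffusion rates, feed and kill rates

u = np.ones(n)
v = np.zeros(n)
u[90:110], v[90:110] = 0.5, 0.25          # seed a small perturbed region

def laplacian(a):
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a   # periodic boundary

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# Peaks of v show up as separated bands of '#': the pattern Turing predicted.
print("".join("#" if x > 0.2 else "." for x in v))
```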

Image Source: YouTube

Unfortunately, Watson and Crick’s model of DNA was discovered soon after this theory appeared, and the theory of morphogenesis went largely unnoticed for a long time. In recent times, however, interest in the topic has surged again: a number of studies argue that DNA and its regulation do not really explain everything, so Turing’s theory may well be worth another look.

To sum up, the contributions Alan M. Turing made in the span of just two decades continue to influence diverse domains, in one way or another, to this day.

— — — — — — — — — — — — — — — — — — — — — — — — — —

This article is based on one of the sessions (delivered by Anirban Dasgupta, a faculty member in the Computer Science and Engineering discipline) of the Virtual Seminar Series by IIT Gandhinagar. It is an online program started by the Institute in the wake of the current pandemic as a means to engage people and help them learn about a variety of topics from the comfort of their homes, in an interesting manner. (The 7th article of this series can be found here. The 9th article is available here.)
