The Limits of Computation

Turing finally settles whether a computer can solve any problem put to it

Karl Beecher
Great Moments in Computing History
7 min read · Jan 26, 2016


In the 1920s, a community of mathematicians believed that they were on the road to the ultimate discovery, one that would far outdo any other. Their aim was no less than the creation of a new general method that could solve any mathematical question.

David Hilbert was the first to draw serious attention to this aim. His tombstone, in Göttingen, a town in central Germany, bears an engraving that says a lot about his optimism in the face of this question:

Wir müssen wissen.

Wir werden wissen.

Which translates as:

We must know.

We will know.

It was Hilbert who first launched the enquiry into whether all problems are solvable. He proposed that any problem, defined properly, could be solved by writing an appropriate algorithm. If Hilbert was right, there would no longer be any unsolvable problems. All a mathematician had to do was feed the parameters of the problem into the method, run it, and a solution would be output. It could even be realised in machine form, meaning a mathematician’s job would be to set some input switches, pull the handle on some contraption and read the output…just like Babbage’s Analytical Engine. Never again would a mathematician have to say, “I don’t know”, or “There’s no way to work that out”. In fact, with some oracle-like automaton in existence answering every question, we might never need mathematicians again!

Throughout the 1920s and 1930s, a group of mathematicians worked on finding the super-algorithm, an all-knowing process that could answer any mathematical question. But these mathematicians, who we can call the first generation of computer scientists, were sorely disappointed. As it turned out, there are limits to what we can ever hope to compute. Hard, fundamental limits.

Not long after Turing et al. showed that any properly-formed series of steps can be expressed as an algorithm, it was discovered that certain types of mathematical problems are not solvable by any algorithm. Disappointingly, this means that we can come up with perfectly valid problems that we cannot solve no matter how hard we try. In such cases, we may find shortcuts or heuristics that yield answers we judge to be “good enough”, but a general method guaranteed to produce the answer is provably beyond us. The key to understanding why comes when you try to ascertain whether or not a program will ever halt.

When I talked about Euclid’s algorithm, I pointed out that it uses iteration and therefore keeps looping over the same instructions until some condition is met, at which point it halts. I also dismissed any potential worries about the condition going unmet and thus preventing the algorithm from halting (in other words, going into an infinite loop). Failing to halt is a justifiable concern. Every programmer today is extremely careful about using iteration like this, and does their best to ensure a looping program will halt at some point. This is not just a matter of convenience. A program has to halt in order to report a solution. No halting, no solution. In the case of individual algorithms like Euclid’s, this can be easy to prove. But some algorithms are very complex, and thus the proof can be time-consuming and error-prone.
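
To make the halting condition concrete, here is a minimal Python sketch of Euclid’s algorithm (the function name and example numbers are my own illustration, not taken from the earlier article). The loop keeps running until its condition is met, at which point it halts and a result can be reported:

```python
def gcd(a, b):
    # Euclid's algorithm: keep looping until the remainder is zero.
    while b != 0:        # halting condition: b eventually reaches zero
        a, b = b, a % b  # replace the pair (a, b) with (b, a mod b)
    return a             # the loop has halted, so a solution can be reported

print(gcd(1071, 462))    # prints 21
```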

Alan Turing knew this, which is why he asked whether it would be better to devise an algorithm that takes another algorithm as its input and works out whether it will halt. Turing’s idea was for a method that could analyse an algorithm to see whether it would halt (and thus give a solution) without actually running it. To clarify this point, let’s refer to the input algorithm being analysed as a program. The halt-prover algorithm would take the program, analyse it, then spit out either True or False as the output (True: the input program will halt at some point; False: it will never halt). Such an algorithm would be of enormous benefit to today’s programmers, because never again would they write a program that risks getting into an infinite loop. Unfortunately, scour the world as you might, you won’t find such an algorithm, because Turing proved in 1936 that such an algorithm cannot exist. In so doing, he proved at a stroke that unsolvable problems are a reality. His trick was to use a proof by contradiction.

Turing proved in 1936 that unsolvable problems are a reality.

In a proof by contradiction, you simply make an assumption then demonstrate that if your assumption is true, it would lead to an impossibility and is therefore false. As a simple example, we could prove by contradiction that a triangle may have at most one right angle. To do this, imagine a triangle with three angles — A, B and C — and then assume that angles A and B are both right angles (i.e., 90° each). However, we know that the sum of a triangle’s interior angles is 180° (A + B + C = 180°). If A and B alone add up to 180°, then C must be 0°, thus producing a triangle with only two angles. This is impossible by definition, so the original assumption must be wrong. QED!
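
For the record, the contradiction amounts to a one-line calculation:

```latex
\[
A + B + C = 180^{\circ}, \qquad A = B = 90^{\circ}
\;\Longrightarrow\; C = 180^{\circ} - 90^{\circ} - 90^{\circ} = 0^{\circ}
\]
```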

In his own case, Turing began by assuming that a halt-prover actually existed. He then showed that a consequence of the halt-prover’s existence would be impossible, thus demonstrating that a halt-prover can’t exist. Here’s how he did it.

If a halt-prover existed, it would have a form like this:

P, D → halt-prover → True/False

This follows the same form as our input-process-output model of computation spoken of in previous articles. We take some data (D) and input it to a program (P) then input them both to the halt-prover. We have to examine the data, because the data can determine how the program behaves — the program may halt for some sets of data but loop forever with others. The halt-prover, our process, tells us whether the program will eventually halt if we execute it using this data.
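
If such an algorithm did exist, its interface might look something like the following Python sketch. The name halt_prover and the shape of its arguments are my own rendering of the notation above, not something from Turing; his result is precisely that no body can be written for this function that works in every case:

```python
def halt_prover(program, data):
    """Hypothetical: return True if running program(data) would eventually
    halt, or False if it would loop forever.

    Turing's 1936 result is that no general implementation of this
    function can exist.
    """
    ...
```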

Turing’s next step was inspired. He examined what would happen if the data supplied to the program was the program itself. In other words:

P, P → halt-prover → True/False

Strange as it sounds, feeding a program as input to another program is a perfectly valid thing to do and happens every day in the computing world.
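
Compilers and interpreters, for instance, take programs as their input all the time. As a toy illustration in Python (run it as a saved script, since inspect.getsource needs a source file; the names are purely illustrative), one function can happily accept another function, or even itself, as data:

```python
import inspect

def count_lines(program):
    # Treat a program (here, a Python function) as ordinary input data.
    return len(inspect.getsource(program).splitlines())

def greet():
    print("hello")

print(count_lines(greet))        # one program analysing another
print(count_lines(count_lines))  # ...or even analysing itself
```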

The paradox program

What is the consequence of this for our halt-prover? Put simply, when we try to use the halt-prover, we are led down a paradoxical path. Let’s write a new program called paradox that makes use of the hypothetical halt-prover (shown in the image above). This simple but troublesome program takes the output of the halt-prover test and does the opposite of whatever it predicts. If the output is False (P will never halt), then paradox halts. If the output is True (P will eventually halt), then paradox enters into an infinite loop.
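
As a sketch, reusing the hypothetical halt_prover from earlier (this cannot actually be run, because halt_prover cannot be implemented), paradox might look like this:

```python
def paradox(program):
    if halt_prover(program, program):   # halt-prover predicts program(program) will halt...
        while True:                     # ...so paradox deliberately loops forever
            pass
    else:                               # halt-prover predicts program(program) never halts...
        return                          # ...so paradox halts immediately
```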

Now it gets really mind-bending. We know that a program can accept itself as an input — so what happens if we input paradox to itself? When we do this, the halt-prover tries to analyse paradox to work out if it will eventually halt. There are two possibilities, each with a consequence:

  • If the halt-prover claims that paradox will eventually halt, then paradox will loop forever.
  • If the halt-prover claims that paradox never halts, then paradox will halt.

The two eventualities of the paradox program

In this case, we made only one assumption: that an algorithm exists that can tell us if a program eventually halts. If this single assumption leads to an impossible, paradoxical outcome, the unavoidable conclusion (as per the proof by contradiction) is that the assumption is wrong. A halt-proving algorithm cannot possibly exist, which means we cannot, in general, analyse a program to determine whether or not it will ever halt. This finding, the undecidability of what is now known as the Halting Problem, proves that there is such a thing as an unsolvable problem, at least by any computational means. For those mathematicians seeking the all-knowing algorithm, it was a punishing blow.

I don’t know for sure whether Alan Turing was among those who hoped for the existence of a super-algorithm. Perhaps he was but refused to believe it until the matter was resolved beyond any doubt. Or maybe he feared a future that didn’t need mathematicians and so did his best to demolish the whole notion. In any case, he did the right thing. He took an extraordinary proposition and tried his best to make sure it survived scrutiny. In this case, the proposal collapsed under testing. However convoluted you may find Turing’s paradox example, it demonstrated a monumentally important fact: unsolvable problems exist.

1 The notation “halt-prover(P, P)” is a more convenient alternative to “P, P → halt-prover”, which means P and P are the inputs to the process called halt-prover.
