“This sentence is false”

Martin Vetterli
Published in Digital Stories
3 min read · Nov 4, 2018

Where we learn whether computers can always make a decision, and what happens when they can’t.

“programming language codes” by Markus Spiske on Unsplash

The other day I was playing chess with a friend, and we started talking about the remarkable victory of IBM’s Deep Blue computer over world champion Garry Kasparov in 1997. At the time, that victory marked a huge advance for our growing field of computer science. Seen as a computational problem, however, we realised that chess is actually quite “easy” to solve, since it always has a calculable solution. The main challenge back then was simply to compute all the possibilities in a reasonable amount of time, given the board setup.
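To see what “calculable in principle” means, here is a minimal sketch in Python. It exhaustively solves a toy take-away game rather than chess (chess works the same way conceptually, just over astronomically many more positions), and every name in it is purely illustrative.

```python
from functools import lru_cache

# Toy game: a single pile of stones, each move removes 1 or 2 stones,
# and whoever takes the last stone wins. Like chess, it is finite, so we
# can simply explore every continuation from a given position.

@lru_cache(maxsize=None)
def player_to_move_wins(stones):
    if stones == 0:
        return False  # no stones left: the previous player took the last one and won
    # A position is winning if at least one move leads to a losing position for the opponent.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2) if take <= stones)

for n in range(1, 10):
    print(f"{n} stones: {'win' if player_to_move_wins(n) else 'loss'} for the player to move")
```

The caching only helps because this toy game has very few positions; chess has so astronomically many that no machine can enumerate them all, which is why Deep Blue had to combine a search of limited depth with clever position evaluation.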

In our field, we call this type of problem “decidable”. Many problems in computer science, however, are not decidable, which means that no computer program, now or ever, will always compute the right answer, no matter how long it is allowed to run. Any program that tries will sometimes give a wrong answer, or run forever without giving one, effectively hanging the system.

So can’t one write a smart quality-assurance algorithm that checks whether a given program is liable to crash? This question is actually much more important than it might seem at first sight. Consider, for example, a complex system such as an airplane guidance system, where a software error can obviously have catastrophic consequences. So will we ever have the ultimate automatic checker that tells us, for any piece of software, whether it contains such a bug?

Unfortunately, the answer is no, as the father of computer science, Alan Turing, showed back in 1936. This is known as the “Halting Problem”, and it asks precisely this: given a clear description of a computer program and an input, will the program finish running, or will it continue to run forever? Turing showed that this question is itself not decidable: no general program can exist that answers it for all possible programs and inputs. His proof exploits the circularity of what we are trying to do: writing a piece of software to analyze… software.
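To get a feel for that circularity, here is a minimal sketch in Python. The names `halts` and `paradox` are purely illustrative, not real library functions; the point is only to show how any claimed universal checker could be turned against itself.

```python
# Hypothetical perfect checker: True if running program(data) eventually stops.
# Turing's theorem says no correct implementation of this can exist; the stub
# below only raises an error so that the sketch is self-contained.
def halts(program, data):
    raise NotImplementedError("no such checker can exist")

# A program built to turn any claimed checker against itself.
def paradox(program):
    if halts(program, program):
        while True:   # the checker said "it halts", so loop forever
            pass
    else:
        return        # the checker said "it loops forever", so halt immediately

# Feeding paradox to itself creates the contradiction:
#   if halts(paradox, paradox) returned True,  paradox(paradox) would run forever;
#   if it returned False,                      paradox(paradox) would halt at once.
# Either way the checker is wrong about at least one program, so it cannot exist.
```

This is essentially the same trick as the self-referential sentence discussed next, only written in code instead of in English.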

If you think about it, we encounter these kinds of self-referential problems in everyday language as well. Consider for instance the phrase “this sentence is false”. This is a classic paradox: if the sentence is false, then it is true, and vice versa. But since the phrase is grammatically correct, there is no simple way out of the impasse other than admitting that, when sentences talk about themselves, meaningfulness is not always assured. Turing showed that the same holds true for computer programs, except that, in that case, the argument can be made mathematically precise.

Luckily, this proof of impossibility is primarily theoretical, and in practice we have tools that greatly help the work of software engineers, even if they can never catch everything. Still, the fundamental lesson is one of non-complacency: no matter how clever we are, we can never be completely sure that a little bug hasn’t crept into our ever-smarter computing machines, and no machine can be guaranteed to find that bug for us!
