The Long and Troubled History of 0.999… = 1

The Past
Long before the introduction of infinite decimals, a major practical problem was how to convert fractions into finite positional representations so that quantities of goods could be measured accurately. 1/3 could be represented in bases that had 3 as a factor, like base 12, but it could not be represented in base 10. Ancient civilisations often worked in base 60 because many common fractions could be represented in that base. Indeed, this is how it came about that there are 60 seconds to a minute and 60 minutes to an hour. Sadly, far too often fractions were encountered that could not be converted into a finite representation. Ideally, a way was needed to represent ALL quantities in a single number system. To this end, in the 16th century, Simon Stevin created the basis for modern decimal notation, in which he allowed an actual infinity of digits.
At the time, the concept of an infinite decimal was very controversial. It was well known that the Ancient Greek philosopher Zeno of Elea had devised several paradoxes with the intent of showing that continuous motion was impossible, and that his core argument could be simplified into abstract logic about numbers. This logic appeared incompatible with the concept of an infinite decimal.
Consider a length of 1.1 miles or kilometres or whatever units you desire. This length can be thought of as the length 1 followed by the length 0.1 with nothing between them. Equally, it can be thought of as the lengths 0.9, 0.09, 0.009 and 0.001, followed by 0.1, all with no space between them. It does not matter how we choose to divide up the first length of 1: there must always exist a length IMMEDIATELY BEFORE the trailing length of 0.1. Therefore if the length 1 can exist as 0.9 + 0.09 + 0.009 + … with ‘infinitely many’ parts then, since there are no gaps in our total length of 1.1, there must be a LAST PART in the infinitely many parts that is connected to the trailing 0.1 length. This appears to contradict the concept of ‘infinitely many’, which requires no last part. So it seemed the concept of an ‘infinite decimal’ was already defeated about 2,000 years before it was introduced.
But perhaps finding a way to express all fractions in a single measurement-friendly number system was considered more important than a few old paradoxes. Even so, for over 200 years that followed, mathematicians were troubled by infinite decimals. Determining that 9/10 + 9/100 + 9/1000 + … all add up to 1 would require an infinite amount of work. Another problem is that however many more terms we add, we have still only added a finite number of terms. It seemed that the addition of infinitely many terms was impossible.
Embarrassingly, in order to avoid admitting that we cannot add up infinitely many non-zero terms, they had to use the trick of saying 0.9 + 0.09 + 0.009 + … cannot add up to anything other than 1, so it must add up to 1. There were still more problems. If you take any of the infinitely many terms in the series 0.9 + 0.09 + 0.009 + …, then the sum to your chosen nth term is given by the expression 1 – (0.1)^n, which means that the sum is always a non-zero distance away from 1. This holds for ALL of the infinitely many terms, meaning that no term CAN POSSIBLY EXIST at which 1 is reached. This variation on Zeno’s dichotomy paradoxes appeared to be solid proof that 0.999… cannot equal 1.
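The partial-sum expression above can be checked mechanically. As an illustration (a sketch of my own, with my own function name, not part of the original argument), Python’s exact Fraction arithmetic confirms that every finite partial sum of 9/10 + 9/100 + … equals 1 – (0.1)^n and so falls short of 1:

```python
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n terms: 9/10 + 9/100 + ... + 9/10**n, exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in range(1, 30):
    s = partial_sum(n)
    # Each partial sum matches the closed form 1 - (1/10)**n ...
    assert s == 1 - Fraction(1, 10)**n
    # ... and is therefore a non-zero distance below 1.
    assert s < 1
```

Exact rationals are used rather than floats so that no rounding can blur the gap between the partial sums and 1.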
Then there was the algebraic proof. We start by saying if:
x = 0.999…
Then, multiplying by ten and subtracting, it follows that:
10x – x = 9.999… – 0.999…
And since this appears to simplify to 9x = 9, it would seem to prove 0.999… equals 1. The problem with this proof becomes clear when you think of 0.999… as the series 9/10 + 9/100 + 9/1000 + … If we multiply this series by a factor of ten then we don’t change the number of terms; we have the same terms, one-for-one, as we started with, only now each term is ten times its original value. The subtraction 9.999… – 0.999… cannot cancel out all the trailing terms unless this one-to-one relationship (between the original and the multiplied series) is somehow broken, and we get an extra term out of nowhere.
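At any finite stage this can be seen directly. The sketch below (names my own) uses exact rationals for the truncation x_n = 0.999…9 with n nines, and shows that 10x_n – x_n misses 9 by a leftover term at every stage:

```python
from fractions import Fraction

def truncated(n):
    """x_n = 1 - (1/10)**n, i.e. 0. followed by n nines, as an exact rational."""
    return 1 - Fraction(1, 10)**n

for n in range(1, 20):
    x = truncated(n)
    # 10*x_n - x_n = 9*x_n, which misses 9 by exactly 9*(1/10)**n:
    leftover = 9 - (10*x - x)
    assert leftover == 9 * Fraction(1, 10)**n
    assert leftover != 0  # a leftover term survives at every finite stage
```

Whether that leftover vanishes ‘at infinity’ is of course exactly the point in dispute; the code only checks the finite stages.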
As a further check on whether the trailing terms can all cancel out, consider the general formula for a geometric series, G, with first term ‘a’ and common ratio ‘r’:
G = a + ar + ar² + ar³ + …
Note that 0.999… is the geometric series with a=0.9 and r=0.1. The question is: can we multiply throughout by 1/r and then subtract what we started with, in the same way that we did with 9.999… – 0.999…?
(1/r – 1)G = [a/r + a + ar + ar² + …] – [a + ar + ar² + ar³ + … ]
If we assume that all matching terms cancel out (to ‘infinity’), this simplifies to:
(1/r – 1)G = a/r
The above should apply to all geometric series, both converging and diverging, because none of the manipulations depend on the values of the variables. So if we can find any values for the variables ‘a’ and ‘r’ where the above statement forms a contradiction, then we will have shown our assumption that all trailing terms cancel out was a mistake. The values a=1 and r=1 make the above statement evaluate to 0 = 1 and so the algebraic proof for 0.999… = 1 must be invalid.
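The finite-stage version of this manipulation can be sketched in code (function name mine, and r is assumed non-zero). For a truncated geometric series G_n with n terms, (1/r)G_n – G_n always gains the front term a/r and leaves the back term a·r^(n–1) uncancelled, for converging and diverging values of r alike:

```python
from fractions import Fraction

def G(a, r, n):
    """Partial geometric series: a + a*r + ... + a*r**(n-1)."""
    return sum(a * r**k for k in range(n))

# Check the finite-stage identity (1/r)*G_n - G_n = a/r - a*r**(n-1)
# for a converging case and a diverging case alike (r must be non-zero).
for a, r in [(Fraction(9, 10), Fraction(1, 10)), (Fraction(1), Fraction(2))]:
    for n in range(1, 15):
        lhs = (1 / r) * G(a, r, n) - G(a, r, n)
        # The front term a/r is gained; the back term a*r**(n-1) never cancels.
        assert lhs == a / r - a * r**(n - 1)
```

The identity in the essay is the n → ‘infinity’ version of this, with the back term assumed to have cancelled.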
Even the argument that there is no number between them fails. We begin by assuming that the series with the nth sum of 1 – (0.1)^n, i.e. 0.999…, is a different number to 1. Now if we say 1 is the series that has the nth sum of 1 – 0^n then we can easily find a series halfway between 0.999… and 1, which is the series with the nth sum 1 – (0.5)(0.1)^n and so it is easy to find as many ‘numbers’ as we like between 0.999… and 1. We cannot presume that when we convert these series into decimal form they will all become equal to 1, because that would mean that our starting position is that 0.999… already equals 1.
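The three nth-sum expressions in this argument are easy to compare at finite stages. As a small sketch (function names mine), exact arithmetic shows the ‘halfway’ partial sums lie strictly between the other two at every n:

```python
from fractions import Fraction

# nth partial-sum expressions for the three sequences discussed above.
def s_low(n):  return 1 - Fraction(1, 10)**n                   # the 0.999... pattern
def s_mid(n):  return 1 - Fraction(1, 2) * Fraction(1, 10)**n  # the 'halfway' series
def s_high(n): return Fraction(1)                              # 1 - 0**n for n >= 1

for n in range(1, 25):
    # At every finite stage the 'halfway' sum lies strictly between the others.
    assert s_low(n) < s_mid(n) < s_high(n)
```

Whether these distinct sequences denote distinct numbers is, again, the very question at issue; the code only exhibits the strict ordering of the partial sums.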
In the early 19th century Bolzano and Cauchy introduced the apparatus of limits and convergence. Under this approach you should no longer think of 0.999… as the endless sum 9/10 + 9/100 + 9/1000 + …; instead you should think of it as the ‘limit’ that the increasing (partial) sums are approaching. The succession of increasing partial sums was called a ‘sequence’.
Saying this sequence ‘converges to’ a limit created the impression of reaching a constant value. This idea was reinforced by claiming that a converging series is ‘summable’. The paradoxical operation of adding up infinitely many non-zero quantities had seemingly been solved by changing what ‘summation’ meant. The ‘sum’ of a series was now defined to be the limit. Confusingly, the traditional meaning of ‘sum’ still remained as well (and so some people still think the algebraic proof is valid).
This double meaning of ‘sum’ was not the only issue with this approach, but the other problems were less obvious…
With the limit approach, when you see the symbol 0.999… you should think of its value as being what is returned from the function: THE-LIMIT-OF[9/10 + 9/100 + 9/1000 + …]. This approach was generalised, so that all decimals were then said to contain endless digits, and their values were deemed to be the limits of their increasing partial sums. For example, 2.5 would now be shorthand for 2.5000… (i.e. all finite representations now carry an implied ‘infinitely many’ trailing zeros).
The first problem is that if this limit function returns a decimal value, then in order to assess the value of that decimal we again need to call the limit function, and we end up in an endless loop of calls. To avoid this problem, we claim that when we call this limit function for 0.999…, it returns the rational number 1.
But the limit cannot always be described as a rational. For example, THE-LIMIT-OF[4/1 – 4/3 + 4/5 – 4/7 + …] cannot return a rational and it cannot return a decimal, all it can return is the symbol pi. We have to imagine that this symbol can equal a constant value. Some mathematicians, who call themselves ‘Finitists’, still do not accept this imagined existence.
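This example can be sketched numerically (function name mine). The partial sums of 4/1 – 4/3 + 4/5 – 4/7 + … drift towards the value we label pi, while every finite stage remains an ordinary finite sum:

```python
import math

def leibniz_partial(n):
    """nth partial sum of 4/1 - 4/3 + 4/5 - 4/7 + ... (floating point)."""
    s = 0.0
    for k in range(n):
        s += (-1)**k * 4 / (2*k + 1)
    return s

# The partial sums close in on the value stored as math.pi...
assert abs(leibniz_partial(100_000) - math.pi) < 1e-4
# ...but slowly: after only 10 terms the sum is still some distance off.
assert abs(leibniz_partial(10) - math.pi) > 0.09
```

Note that `math.pi` is itself only a finite binary approximation; the sketch shows the approach of the partial sums, not the completed limit.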
A second problem is that in order to convert ‘infinitely many’ terms into a constant like pi, the function would have to do an infinite amount of work. Thirdly, however many more terms it processes, it will still only have processed a finite number of terms. How can the function process the actual infinity of terms in pi to find its constant value? If these problems mean the limit approach is flawed, then it is no longer valid to define 0.999… as a limit. We are then back to the situation where the logic of Zeno’s dichotomy paradoxes shows it cannot equal 1.
Note that if we think of pi and the square root of 2 as functions that allow us to get as accurate a real-world measurement as we need, then we don’t have the infinity-related problems that we have if we try to think of them as constants. Sadly this approach does not fit with the mainstream Platonist position which is that perfect forms, like a perfect circle and a perfect diagonal of a unit square, MUST somehow exist.
The Present
When we introduce our children to numbers we avoid telling them about these issues. We avoid telling them that to understand the value of a decimal we need to do the sometimes impossible task of finding a limit, and we teach them that to find the sum of several numbers we add them up.
We show them that when we divide 1 by 3 in base 10 we get a long string of 3s after the decimal point plus a remainder. We don’t tell them that the long-term trend is always, always, always a long finite decimal plus a non-zero remainder. Instead, as if doing infinitely many things is no issue at all, we calmly and confidently tell them that this pattern leads to the conclusion that one third can be represented with ‘infinitely many’ digits. Once they have accepted 1/3 = 0.333… then we simply multiply both sides by 3 to get 1 = 0.999…
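The schoolbook division pattern can be sketched in code (a hypothetical helper of my own): at every step of dividing 1 by 3, the next digit is 3 and the remainder is never zero:

```python
# Long division in base 10, one digit at a time.
def long_division_digits(numerator, denominator, steps):
    """Return (digits, final_remainder) after the given number of steps."""
    digits, remainder = [], numerator
    for _ in range(steps):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits, remainder

digits, remainder = long_division_digits(1, 3, 50)
assert digits == [3] * 50      # every step produces another 3...
assert remainder != 0          # ...and the remainder is never zero
```

By contrast, a fraction like 1/4 terminates, with the remainder reaching zero after finitely many steps.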
We might tell them we cannot find numbers between 0.999… and 1, despite the known flaws in this argument. We might tell them the formula for finding the limit of a geometric series is the formula for finding the “sum of an infinite geometric series”. This creates the impression we have a magic formula for adding up infinitely many terms.
In short, we use our authority as knowledgeable adults together with a series of tricks to convince our unsuspecting young students that 0.999… equals 1. But the question “does 0.999… equal 1?” will not go away because the many issues that have plagued infinite decimals since their conception have yet to be resolved.
