0.1 + 0.2 == 0.3 Evaluates to False!

Divya Chatty · Published in The Startup · 6 min read · Oct 5, 2020

Case File #001 — HS Investigates

This column was written by my guest writer Hermione Snow, a.k.a. HS. About HS: she is witty, determined to a fault, and passionate about Data Science. Whenever she questions something, she doggedly pursues it till she understands it. She also has a knack for writing notes.

Fire up your Jupyter Notebook and type the following few commands. Don’t execute them just yet.

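Here is a sketch of the snippet from the original screenshot. The exact variable names are my assumption; the comparison and the printed strings follow from the discussion below.

```python
a = 0.1
b = 0.2

if a + b == 0.3:
    print("Sum of 0.1 and 0.2 is 0.3")
else:
    print("Sum of 0.1 and 0.2 is NOT 0.3")
```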

What do you think the answer will be?

It is obvious, isn’t it? Going by whatever we have learnt in life, the sum of 0.1 and 0.2 is 0.3. Consequently, we expect that the if condition will evaluate to True and the output will show the string: “Sum of 0.1 and 0.2 is 0.3”.

Now, go ahead and execute the commands. What do we see?

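Assuming the sketch above, a standard CPython session falls into the else branch:

```
Sum of 0.1 and 0.2 is NOT 0.3
```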

Isn’t that surprising? Conventional mathematics does not seem to apply here! Isn’t that a contradiction of everything we know?

Is this erratic and seemingly irrational behavior true for all floating-point mathematics? Let’s check.

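The original screenshot is gone, but sums like these behave exactly as expected (the particular values here are my choice; any sums of halves and quarters will do):

```python
print(0.5 + 0.25 == 0.75)  # True
print(1.5 + 2.25 == 3.75)  # True
print(0.5 + 0.5 == 1.0)    # True
```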

So, what’s special about 0.1 + 0.2? It’s a puzzle, isn’t it?

Problem Statement: Why does 0.1 + 0.2 == 0.3 evaluate to False?

Is floating mathematics fundamentally flawed in computers?!?

Let’s start simple. Just print 0.2 as it is. Then go ahead and print 0.3. Everything is fine so far. Now, go ahead and print 0.1. This seems fine too.

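In a Python session, those prints look perfectly normal (the interpreter shows the shortest decimal string that round-trips back to the stored value):

```python
print(0.2)  # 0.2
print(0.3)  # 0.3
print(0.1)  # 0.1
```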

Now, go ahead and print 0.1 + 0.2.
In place of 0.3, we get a rather funny output.

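On any machine using IEEE 754 double precision, which is essentially every machine CPython runs on, this is what comes back:

```python
print(0.1 + 0.2)  # 0.30000000000000004
```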

What’s with all the zeroes? And where did the 4 come from?

If your instinct right now is to fling something at your computer, hold your horses. Your computer has not gone crazy, and it is not behaving in a way that it shouldn’t.

In fact, it is doing exactly what we taught it to do.

A computer fundamentally cannot understand anything beyond a 0 and a 1. Those who are familiar with a little bit of binary arithmetic will know that a computer represents numbers not in the decimal system we know, but in binary, in terms of powers of 2.
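For instance, Python will happily show you the binary form of an integer:

```python
print(bin(13))   # 0b1101  (8 + 4 + 1)
print(0b1101)    # 13
```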

The computer, for the most part, can represent integers (positive and negative) easily and without error (provided you have a sufficient number of bits).
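Python makes the contrast easy to see. Its integers are arbitrary-precision, so integer arithmetic is exact; its floats are 64-bit IEEE 754 doubles, so they are not:

```python
print(2**100)               # 1267650600228229401496703205376, exact
print(2**100 + 1 - 2**100)  # 1, no rounding anywhere
print(0.1 + 0.2)            # 0.30000000000000004
```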

But when it comes to expressing floating-point numbers, there is a limitation. We need to know a little about how floating-point numbers reside in the computer.

Enter IEEE Standard 754.

IEEE Standard 754 is the standard by which most computers represent and work with floating-point numbers.

Consider what this Standard has to do. There are infinitely many numbers between 0 and 1, and infinitely many between negative infinity and positive infinity. The task of the Standard is to represent those numbers in just 16, 32, or 64 bits. For comparison, 2⁶⁴ is 18,446,744,073,709,551,616.
A huge number, to be sure, but still nowhere near infinity.

How does the standard work then?

It works on a system of compression: squeezing each number to the nearest value representable within the confines of the bit width in use.

In an n-bit system, the first bit (from the left) is always reserved for the sign. The rest of the bits are divided into two unequal parts: the exponent and the mantissa.

I am not going into the details of how this system works. You can find excellent resources and examples at the end of the post, if you are interested.

The general idea of how it works is sufficient to understand a crucial point here. The exponent identifies the region on the number line where the number lives. For example, an exponent value of 3 (011 in binary) indicates that the number we are trying to map is in the range [2³, 2⁴).

The mantissa is used to zoom in within that range and find the exact location of that number.
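You can peek at this layout from Python itself. A minimal sketch using the standard struct module (the bit slicing follows the 64-bit IEEE 754 double format: 1 sign bit, 11 exponent bits, 52 mantissa bits):

```python
import struct

# Reinterpret the 8 bytes of a double as a 64-bit unsigned integer
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]

sign     = bits >> 63               # 1 bit
exponent = (bits >> 52) & 0x7FF     # 11 bits (biased by 1023)
mantissa = bits & ((1 << 52) - 1)   # 52 bits

print(f'{bits:064b}')
print(sign, exponent - 1023, hex(mantissa))  # 0 -4 0x999999999999a
```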

Consider this example, by way of analogy:

Consider an example of a postman trying to deliver a letter. He travels in his mailvan and deposits the post directly in the postbox. Assume, for the sake of this example, that he cannot get down from the mailvan at any point of time on his delivery route. He has to stay inside it at all times.

He locates the general whereabouts of a person’s home by the area code. He drives to that area and looks for the home by using the door number. He is almost always accurate in his deliveries.

One day, in a particular case, he finds that the mailbox of one receiver is in a small, narrow lane, unreachable by the mailvan. So, he drops the post in the nearest possible mailbox and leaves.

Don’t ask me if what he did was wrong or right. Consider this analogy instead.

Computer vs. Mailvan Analogy:

Area code → Exponent (locates the region on the number line)
Door number → Mantissa (pinpoints the address within that region)
A lane the mailvan cannot enter → A number with no exact binary representation
Dropping the post in the nearest mailbox → Rounding to the nearest representable value

I hope that, from this analogy, it is clear that some floating-point numbers (not all) are impossible to express exactly in a computer.

Under this system of compression, there is a tradeoff between the range of numbers that can be represented and their respective precisions.

Now, let us come back to the number 0.1. The problem lies in the representation of 0.1.

We can never have enough bits to represent 0.1 with absolute precision.

[Image: 0.1 written out in binary, up to 1,369 places and counting]

Credits: https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/
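You do not need the full expansion to see the problem from inside Python. Decimal can print the exact value of the double that 0.1 actually becomes, and float.hex shows the repeating pattern directly:

```python
from decimal import Decimal

print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

print((0.1).hex())
# 0x1.999999999999ap-4  (the 9s are the repeating binary fraction)
```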

That is why our computers approximate the value of 0.1 to however many bits the system allows.

A useful way to know whether a floating-point number can be written precisely in an n-bit system is to check the denominator of its fractional representation, reduced to lowest terms.
For example, 0.5 can be written exactly as 0.1 in binary because
0.5 = 5/10 = 1/2.

The denominator is a power of 2, and so its binary representation is exact.
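That rule is easy to turn into code. A small sketch (the helper name exactly_representable is mine, not from the post):

```python
from fractions import Fraction

def exactly_representable(decimal_literal: str) -> bool:
    # In lowest terms, a fraction is exact in binary
    # iff its denominator is a power of two.
    d = Fraction(decimal_literal).denominator
    return d & (d - 1) == 0

print(exactly_representable('0.5'))   # True  (1/2)
print(exactly_representable('0.75'))  # True  (3/4)
print(exactly_representable('0.1'))   # False (1/10)
```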

And that is why 0.1 + 0.2 evaluates to the weird-looking number 0.30000000000000004! The tiny representation errors in 0.1 and 0.2 add up, and the sum rounds to the nearest double, which sits just above 0.3.

“But, HS! Fine, we understood how floating-point numbers are represented. But our code still failed even though our logic is right! How do we solve the problem?” Is that what you’re saying?

Fret not.

You specify the precision level you are expecting, and voilà!

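A sketch of the fix (whether the original screenshot used round() is my assumption; math.isclose is the more idiomatic alternative):

```python
import math

a = 0.1
b = 0.2

# Compare at an explicit precision instead of exact equality:
if round(a + b, 10) == 0.3:
    print("Sum of 0.1 and 0.2 is 0.3")

# Or, more idiomatically:
print(math.isclose(a + b, 0.3))  # True
```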

The main takeaway from this case is this:

The key is not the correctness of your logic; the key is the precision level you are operating at!

Case Closed!

I hope this was informative for you!

Final Case Scribbles:

But, I am still not satisfied. There seems to be something I am missing. Let me leave you with an intriguing question.

We know that the world relies on computers for banking. Computers are essential for space exploration and scientific studies. The calculations involved in these sectors are extremely complex and precise. No errors can be tolerated at all.
So my question is this:

If floating-point mathematics is so imprecise, how in the world does a rocket go up into space correctly?

The answer will be discussed in the next post. Watch this space.

This is HS signing off. See you in the next post. Happy investigating!
