Don’t deal with money in Float or Double

Money: something we all want. It matters enough that calculation mistakes are unacceptable. When you operate on billions, even an error of 0.0000001 can wreck you. So why should you never use float or double in systems that deal with money?

When it comes to money you need to be precise, and floating-point arithmetic is prone to rounding error. Floating-point numbers cannot represent all real numbers, or even all rational numbers, exactly; beyond a certain precision, the leftover digits are dropped or rounded. Let’s understand this with an example.

Say you have 1/3 and you want to represent it with two digits of precision. The output is 0.33 and the remaining digits are thrown away. The problem is that 0.33 is not the exact value of 1/3. Binary floating point does the same thing to decimal values like 0.1, which have no finite binary representation.
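You can watch the truncation happen. A minimal Java sketch (names are mine, just for illustration):

```java
public class OneThird {
    public static void main(String[] args) {
        double oneThird = 1.0 / 3.0;
        // A double stores only finitely many digits of the infinite expansion.
        System.out.println(oneThird);           // 0.3333333333333333
        // Rounding to two digits discards the rest permanently.
        System.out.printf("%.2f%n", oneThird);  // 0.33
    }
}
```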

To see this in practice, suppose you receive money from three different sources: 30 cents, 60 cents, and 10 cents. You simply want to know how many dollars you got in total.

This is what you might do; it looks easy. What can go wrong?
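Something like this Java sketch, assuming the amounts are held as double dollars (a reconstruction of the naive approach, not any particular production code):

```java
public class NaiveSum {
    public static void main(String[] args) {
        // 30 cents + 60 cents + 10 cents, stored as double dollars.
        double total = 0.30 + 0.60 + 0.10;
        System.out.println(total); // 0.9999999999999999, not the 1.0 you expected
    }
}
```

None of 0.30, 0.60, or 0.10 is exactly representable in binary, so each addition accumulates a tiny error, and the printed total falls just short of a dollar.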

It seems you just lost some cents. And this is a single, tiny addition; real-world software systems perform far more complex calculations, where a very small error can cascade. So when you build systems that require exact results, float and double should be the last choice. Use a decimal type instead, since decimal arithmetic matches the base-10 amounts humans actually write down.
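In Java that means BigDecimal. Here is a sketch of the same sum done exactly; note the values are constructed from strings, because constructing from a double would import the very error we are avoiding:

```java
import java.math.BigDecimal;

public class ExactSum {
    public static void main(String[] args) {
        // String constructors capture the decimal values exactly.
        BigDecimal total = new BigDecimal("0.30")
                .add(new BigDecimal("0.60"))
                .add(new BigDecimal("0.10"));
        System.out.println(total); // 1.00, exactly one dollar
    }
}
```

An equally common alternative is to store amounts as integer cents in a long and only format them as dollars at the edges of the system.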
