JavaScript can’t math gud.

JavaScript can’t math good, and that’s okay.

To be honest, you can’t really blame it; the basic problem is inherent to all computer systems.

Think of it like this: how many bits does a computer program have to represent a number? It depends (JavaScript uses 64), but either way the number of bits is finite. Now, how many digits might a number have? The answer is: not finite.

Therefore, at some point a computer runs into the problem of trying to represent infinitely many numbers within a finite space.
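You can watch the finite space run out with plain integers. A small sketch (Number.MAX_SAFE_INTEGER is a real JavaScript constant; the rest is illustrative):

```javascript
// JavaScript numbers are 64-bit floats with 53 bits of integer precision.
// Beyond Number.MAX_SAFE_INTEGER (2^53 - 1), not every integer can be
// represented exactly, so distinct values start to collide.
const max = Number.MAX_SAFE_INTEGER; // 9007199254740991

console.log(max + 1 === max + 2);           // true: two different sums, one stored value
console.log(Number.isSafeInteger(max + 1)); // false
```

Past that boundary, the gaps between representable numbers only get wider.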

The problem isn’t hard to run across:

var x = 0.1;
var y = 0.2;
var z = x + y;
console.log(z); // 0.30000000000000004

What is z? If you say “0.3”, JavaScript would like a word with you.

Decimals are a representation of fractions, usually in base 10. For example, 0.5 is the representation of 1/2, and it is a finite representation: we know it is exactly right, and no more digits are expected to come shlupping in after the ‘5’. However, look at the decimal representation of 1/3: we write it as 0.3333…, and eventually we just quit writing. We know we leave it as an unfinished job, but society agrees we can just quit at some point. The same thing happens in binary: 1/10 and 1/5 are repeating fractions in base 2, so the computer has to quit at some point too, and that is why 0.1 + 0.2 doesn’t come out to exactly 0.3.
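In binary, 1/10 is one of those repeating fractions, so JavaScript stores a rounded-off approximation of 0.1. Asking for more digits than it normally prints exposes the leftovers:

```javascript
// 1/2 is exact in binary, so printing extra digits shows only zeros.
console.log((0.5).toFixed(20));

// 1/10 repeats in binary, so the stored double is an approximation:
// nonzero digits show up past the usual display precision.
console.log((0.1).toFixed(20));
```

The default console output of 0.1 looks clean only because JavaScript prints the shortest string that rounds back to the same stored double.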

The number at issue could be very large or very small; the relevant issue is the number of digits involved. At some point, even the best computer throws its digital hands in the air, coughs politely, and rounds off, figuring you aren’t likely to notice. And the truth is, for most applications except math itself, you aren’t likely to notice. (Even NASA only uses about 16 digits of pi when calculating interplanetary travel. With 39–40 digits you could calculate the circumference of the visible universe to within the diameter of a hydrogen atom.)

“Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation… Given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.”¹

The consequences show up as:

• Rounding errors in simulations and math problems.
• Unforeseen responses in computer programs.

Ways in which floating-point errors can affect computer system behavior:

• Computer architectures usually have floating-point instructions.
• Compilers must generate those floating-point instructions.
• Operating systems must decide what to do when exceptions are raised by those instructions.
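JavaScript itself takes the “don’t raise, return a sentinel” route: IEEE 754 exceptional results come back as special values rather than thrown errors. For example:

```javascript
// Floating-point "exceptions" in JavaScript surface as special values,
// not as thrown errors.
console.log(1 / 0);                // Infinity: division by zero doesn't throw
console.log(Number.isNaN(0 / 0));  // true: 0/0 is NaN ("not a number")
console.log(Number.MAX_VALUE * 2); // Infinity: overflow doesn't throw either
```

This is convenient until a NaN quietly propagates through a long calculation, so it is worth checking suspect results with Number.isNaN or Number.isFinite.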

So, TL;DR: computers are tangible, finite boxes which must somehow store and manipulate numbers that can be infinitely long. At some point, rounding off is the only way out.

Sources:

¹ David Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic,” ACM Computing Surveys, 1991.
