Rob, mate, what a way to concisely make my point for me! Also, great link to back up my arguments and to show the limitations you've so gracefully pointed out in my humble post. But it's not cool to open with a misleading contradiction…
Thanks for explaining to readers, in lower-level detail, why "some" programming languages fail to deliver mathematical precision. It's hard to build on such binary constraints… #amiryt
In case you've misled anyone: software developers, particularly layer 3 language programmers, expect arbitrary-precision arithmetic.
Like a scientific calculator
They expect it because universities haven't taught them that there's a practical difference between (to abstract the concept) raw compute and calculators.
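To make the "raw compute vs calculator" gap concrete, here's a minimal sketch in Python (just one example language; the same gap exists anywhere IEEE 754 binary floats are the default). Raw binary floats can't represent 0.1 exactly, while the standard-library `decimal` module behaves the way a scientific calculator does:

```python
# Raw compute: IEEE 754 binary64 floats. 0.1 has no exact binary
# representation, so the sum drifts from what a calculator shows.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004, not 0.3
print(total == 0.3)  # False

# "Calculator" behaviour: decimal arbitrary-precision arithmetic
# from Python's standard library.
from decimal import Decimal

total = Decimal("0.1") + Decimal("0.2")
print(total)                    # 0.3
print(total == Decimal("0.3"))  # True
```

Note the `Decimal("0.1")` constructor takes a string: passing the float `0.1` would bake the binary rounding error into the Decimal before the addition even happens.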
An analyst, mathematician, or statistician will require calculator precision, but they don't know how to tell a programmer they need arbitrary-precision arithmetic, or even that they need to, because they don't know, and shouldn't have to care, what the tool a developer uses (the programming language) is capable of producing.
We are all learning every day, right? Keep an open mind.
In case you want to respond about binary again: don't. We're (I'm) all quite clear on your points and, obviously, agree with you. You're describing facts that are basic, entry-level computer science. You're clearly from academia. In industry, though, we care about implementation and practical experience, and the key concept we are discussing here is called arbitrary-precision arithmetic, or mathematical precision in layman's terms, not entry-level CS.
For the everyday developer, it's theoretically possible to be 100% meticulous and use any language in a way that stays precise, but how many developers have you met who are perfectly meticulous?
Debugging is the practice of removing bugs from software; therefore programming is the practice of… adding bugs.
To be successful in life, set yourself up for success: choose the appropriate tool for the task. This is entry-level industry. So, for calculating with precision, use a programming language with arbitrary-precision arithmetic.
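As a hedged sketch of "choose the appropriate tool" (again using Python as the example language), here's the classic case of summing 0.1 ten times. With binary floats the errors accumulate; with the standard-library `Fraction` type (exact arbitrary-precision rational arithmetic) the result is exactly 1:

```python
from fractions import Fraction

# Wrong tool: binary floats. Each 0.1 carries a tiny representation
# error, and ten additions accumulate it into a visible miss.
float_total = sum([0.1] * 10)
print(float_total)       # 0.9999999999999999, not 1.0

# Right tool: exact rational arithmetic. 1/10 is stored exactly,
# so ten of them sum to exactly 1.
exact_total = sum([Fraction(1, 10)] * 10)
print(exact_total)       # 1
print(exact_total == 1)  # True
```

The design point is the same one made above: the meticulousness lives in the tool, not the developer. Once the values are `Fraction` (or `Decimal`), ordinary `+` and `sum` stay exact without anyone having to be careful.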