This behaviour is not unique to JavaScript. It is seen in every programming language where doubles are available, including C, C++, C#, Erlang, Java, Python and Rust.
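For example, Python (used here purely as a convenient demo; any language that stores the values as IEEE 754 doubles behaves the same way) gives the identical answer:

    # Any language that stores 0.1 and 0.2 as IEEE 754 doubles
    # produces the same result as JavaScript.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False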
Replace JS with your favorite language:
https://0.30000000000000004.com/
JS just happens to be everyone’s favorite punching bag nowadays ;)
Most other languages I know of let you specify that a number is an integer ;)
EDIT: wow, this was a very bad brain fart, and I’m leaving it here as a testament to how quickly people will pile on JavaScript without thinking. Sorry everyone.
0.1 + 0.2 doesn’t work well for integer math either.
Multiply them by a large enough power of ten to turn them into integers. Do the math in integer form, under integer rules (1 + 2). Then reverse the process to get the float result.
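A minimal sketch of that trick in Python (the scale factor of 10 is an assumption that fits this one-decimal-place example; in general you’d pick a power of ten large enough for your inputs):

    # Scale by a power of ten so the operands become exact integers,
    # add under integer rules, then scale back down at the end.
    SCALE = 10  # assumed: one decimal place is enough here

    a, b = 0.1, 0.2
    result = (round(a * SCALE) + round(b * SCALE)) / SCALE
    print(result)  # 0.3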
I’ve done that before, a long time ago. I can’t remember why I did it. Maybe just to dodge floats’ problems during number crunching. I know people’s reactions to it were funny.
I’m going to point out that COBOL handles this tricky calculation “unsurprisingly” by default. We should all switch from JavaScript to COBOL.
Upvoting because the edit is a good example of self-awareness and humility.
the author does state:
I very much like the way this blog wrote the numbers out in full; that helps to see what’s really going on behind the scenes.
It’s actually kind of interesting that you can exactly represent doubles in decimal, but you can’t exactly represent most decimal numbers as doubles. This is because 2 divides 10, but 10 does not divide any power of 2 (the factor of 5 is missing on the binary side). So with enough decimal digits, you have enough 2s in the denominator to exactly represent any fractional binary number.
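You can see both directions of that with Python’s decimal module: the double nearest to 0.1 has an exact, finite decimal expansion, even though 0.1 itself has no exact binary form:

    from decimal import Decimal

    # The exact value of the double that 0.1 gets rounded to:
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # Any 1/2**k terminates in decimal, because 10 = 2 * 5:
    print(Decimal(1) / Decimal(2**10))  # 0.0009765625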
Great and creative way to set the tone!
Yeah computers are stupid
John von Neumann, probably.
What language should one use for floating point calculations?
TCL, of course, because EIAS!
In most languages there are libraries for exact decimal arithmetic. For example, you can use Decimal in Python: https://docs.python.org/3/library/decimal.html
But it may not be as fast, and it has other inconveniences (for example, you have to set the precision explicitly).
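A small example of what that looks like (the precision line is the explicit step mentioned above):

    from decimal import Decimal, getcontext

    getcontext().prec = 28  # precision has to be chosen explicitly

    # Build the values from strings so no binary rounding happens first.
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True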
There are some good examples in the list @HugoDaniel posted in this thread.
Here’s something interesting for JS: http://mikemcl.github.io/decimal.js/
If you want it to work correctly, then probably SPARK, C w/ tooling, or Gappa w/ a language of your choosing. If performance isn’t an issue, there’s a pile of languages with arbitrary-precision arithmetic, plus libraries for those without it. I’d say those are the options.
Meanwhile, there’s work in formal methods on Exact-Real Arithmetic that could give us new options later. There was an uptick in it in 2018. I’m keeping a distant eye on it.
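For the arbitrary-precision route mentioned above, here’s one illustration using Python’s fractions module (just one example of exact rational arithmetic, not one of the tools named in the comment):

    from fractions import Fraction

    # Exact rational arithmetic: nothing is rounded at any step.
    total = Fraction(1, 10) + Fraction(2, 10)
    print(total)                     # 3/10
    print(total == Fraction(3, 10))  # True
    print(float(total))              # 0.3 (rounding only happens here)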