
Stuff like this makes me really unhappy:

1.1 + 1.01

2.1100000000000003

Especially when I want some versatile way to display that information according to its units: money, uptime, and so on.

Always work in cents, seconds, etc. Units small enough to not need further division.
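A minimal sketch of that approach (the helper names `addCents` and `formatUSD` are made up for illustration): hold amounts as integer cents, do arithmetic on the integers, and convert to a display string only at the edge.

```javascript
// Amounts are integer cents; integer addition is exact up to
// Number.MAX_SAFE_INTEGER, unlike 1.1 + 1.01 on floats.
function addCents(a, b) {
  return a + b;
}

// Format integer cents as a dollar string; assumes non-negative amounts.
function formatUSD(cents) {
  const dollars = Math.trunc(cents / 100);
  const pennies = cents % 100;
  return `$${dollars}.${String(pennies).padStart(2, "0")}`;
}

console.log(addCents(110, 101)); // 211 -- exact
console.log(formatUSD(211));     // "$2.11"
console.log(formatUSD(5));       // "$0.05"
```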

Isn’t the issue that all numbers in JavaScript are floats? As soon as you use division, you may have a problem even if you’re trying to keep everything in ints.

If you divide integers without thinking, you won’t get good results either. 3 / 2 == 1.

Real world quantities are rarely infinitely divisible. You always have to think about the remainder and where it goes, regardless of the representation your computer uses to perform the calculation.

Trivial example: 3 roommates split the $1000 rent. If they each pay $333.33, they’ll come up a penny short. Whatever process the roommates use to decide who pays the extra penny, you have to add that code to your rent splitting computer program. It’s not going to happen by magic.
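The shortfall is easy to demonstrate directly with integer cents (a quick sketch):

```javascript
// Each roommate paying $333.33 leaves the $1000 rent a penny short.
const rent = 100000;       // $1000.00 in cents
const naiveShare = 33333;  // $333.33 in cents
const collected = naiveShare * 3;
console.log(collected);        // 99999
console.log(rent - collected); // 1 -- the leftover penny someone must cover
```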

The problem is not floating point, per se; it’s omitting the large part of the process that deals with edge cases when translating the real world into a program.

That’s not my point. The grandparent post said:

In JS, if you pretend to try to use ints, you get:

So working in “cents, seconds, etc” fails. It just fails differently than you expect if you’re expecting JS to be using integer math.
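A sketch of that failure mode: JavaScript’s `/` always produces a float, even on integer operands, so “integer” division needs an explicit floor.

```javascript
// Dividing "integer" cents still yields a float; JS has no integer / operator.
const cents = 100;
const splits = 3;
console.log(cents / splits);                   // a non-integer float, not 33
console.log(Number.isInteger(cents / splits)); // false
console.log(Math.floor(cents / splits));       // 33 -- explicit truncation
console.log(cents % splits);                   // 1  -- remainder still works on integers
```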

Buggy code is going to be buggy. A better exercise is to try writing correct code. The correct version of the above isn’t too difficult. Then compare that to writing the same code but using an amount of 1.00 dollars.

Working in cents doesn’t automatically make the code correct, but at least it allows you to write correct code.

So, what’s the correct version of the above? I don’t know JS very well; I’m curious.

Calculate the split and remainder, then distribute the remainder afterwards:

const cents = 100
const splits = 3
const share = Math.floor(cents / splits)
let remainder = cents % splits
const people = []
for (let i = 0; i < splits; i++)
  people[i] = share
while (remainder--)
  people[remainder]++

This will give you people = [ 34, 33, 33 ].

Trying to do this with dollars is impossible because you can’t store either 0.33 or 0.34 in a float. The best split that can be accurately represented would be 0.25, 0.25, and 0.50. (You could come close by allowing fractional pennies. 0.3125, 0.3125, 0.375 add up to exactly 1.00, but you still have the problem that nobody has fractional pennies and rounding after the fact is extremely error prone.)
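The dyadic-fraction claim checks out: 0.3125 and 0.375 are 5/16 and 3/8, i.e. sums of powers of two, so they are exactly representable in binary and their sum is exactly 1 (a quick sketch):

```javascript
// 5/16 + 5/16 + 3/8 = 1 exactly, because every term is a dyadic fraction.
console.log(0.3125 + 0.3125 + 0.375 === 1); // true
// The sum is exact, so scaling it to pennies is exact too:
console.log(Number.isInteger((0.3125 + 0.3125 + 0.375) * 100)); // true
// 0.33 and 0.34 have no exact binary representation, so the analogous
// check with real pennies cannot be made exact.
```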

When you need a perfectly accurate representation with a known level of precision (like money, or uptime), fixed-point numbers (or at least some take on the concept, à la twips) are your best bet.
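A minimal sketch of that idea, using BigInt counts of a smallest unit (1/10000 of a dollar here, in the spirit of twips). The names `fromDollars` and `toDollars` are made up, and the parser assumes non-negative, well-formed input:

```javascript
// Fixed-point money: BigInt units of 1/10000 dollar.
const SCALE = 10000n;

// Parse a decimal string like "1.01" into BigInt units.
function fromDollars(str) {
  const [whole, frac = ""] = str.split(".");
  return BigInt(whole) * SCALE + BigInt(frac.padEnd(4, "0").slice(0, 4));
}

// Render BigInt units back to a decimal string with 4 fractional digits.
function toDollars(units) {
  const whole = units / SCALE;
  const frac = units % SCALE;
  return `${whole}.${String(frac).padStart(4, "0")}`;
}

// 1.1 + 1.01 comes out exact, with no floating-point drift:
console.log(toDollars(fromDollars("1.1") + fromDollars("1.01"))); // "2.1100"
```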

I’ve long been on the lookout for a JavaScript BigDecimal library. This one is by far the best I have found: https://github.com/MikeMcl/big.js

The title of this article is a bit misleading; these problems exist in any language that uses IEEE 754 floats.