You could do that, but it’s error-prone, complicated, and probably unnecessary.
Why should a maintenance programmer have to deal with getting the rounding right?
This is a particularly bad idea if your program supports plugins, as Java, GIMP, and VirtualDub do.
My summary: if you are doing some math outside the field of engineering, use a decimal type.
What kind of financial math doesn’t have to be accurate?
Some kind of estimation?
I’m trying to figure out why this guy is fixated on using floating point. I also hope junior devs don’t read this.
His argument is that you have to apply corrections to keep floating point accurate, but you have to apply corrections with fixed point and integers anyway if you’re dealing with taxes or percentages. So using floating point isn’t meaningfully worse.
As for why, isn’t it interesting to analyze conventional wisdom?
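For the tax case he mentions, the correction is needed regardless of representation (a minimal Python sketch with a hypothetical 7% rate; the rounding rule here is an assumption, not anyone’s prescribed standard):

    from decimal import Decimal, ROUND_HALF_UP

    price_cents = 1999           # $19.99 held as integer cents
    tax_rate = Decimal("0.07")   # hypothetical 7% tax rate

    # Even with exact integer cents, 1999 * 0.07 = 139.93 cents:
    # a rounding rule still has to be chosen explicitly.
    tax_cents = int((price_cents * tax_rate).quantize(Decimal("1"), rounding=ROUND_HALF_UP))
    print(tax_cents)  # 140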
I’d prefer more rigor than

Let’s start by discussing how to get the “right” answer for financial calculations (I have not worked in banking, so please correct me if I get this wrong)

I am not a theoretician and have not proven that this is actually correct. Let me know if you find a counter-example.

If it is good enough for Excel, it will be good enough for most applications.
Compare this to Mark Jason Dominus’ discussion on the same topic (https://blog.plover.com/prog/Moonpig.html#fp-sucks):

[…] with floats, it’s so hard to be sure that you won’t end up with a leftover 2e−64 or something, so you write all the tests to ignore small discrepancies. This can lead to overlooking certain real errors that happen to result in small discrepancies. With integer amounts, these discrepancies have nowhere to hide. It sometimes happened that we would write some test and the money amount at the end would be wrong by 2m¢. Had we been using floats, we might have shrugged and attributed this to incomprehensible roundoff error. But with integers, that is a difference of 2, and you cannot shrug it off. There is no incomprehensible roundoff error. All the calculations are exact, and if some integer is off by 2 it is for a reason.
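The contrast he describes is easy to reproduce (a toy Python sketch, not from either post):

    # With floats, tests need a tolerance, and real bugs can hide inside it.
    total = sum([0.1] * 10)
    print(total == 1.0)              # False: total is 0.9999999999999999
    print(abs(total - 1.0) < 1e-9)   # True, so a tolerance-based test passes anyway

    # With integer cents, every operation is exact; an off-by-2 is a bug, not roundoff.
    total_cents = sum([10] * 10)     # ten 10-cent amounts
    print(total_cents == 100)        # True, exactly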
The point is that the difference between doubles and decimal types is mostly quantitative, not qualitative.
You still have (1 / 3) * 3 =/= 1 when you use decimals, and you still have (bignum + smallnum) - bignum = 0 when smallnum < 1 and bignum is large enough. There’s just a lot more precision to hide this, and there’s the fact that decimal numbers which are not very big can be represented exactly in a decimal data type.
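Both cases are easy to reproduce with, say, Python’s decimal module (a quick sketch at the module’s default 28-digit precision):

    from decimal import Decimal, getcontext

    getcontext().prec = 28           # the module's default precision

    # (1 / 3) * 3 != 1 holds for decimals too:
    third = Decimal(1) / Decimal(3)
    print(third * 3)                 # 0.9999999999999999999999999999

    # (bignum + smallnum) - bignum == 0 once smallnum falls below the
    # precision available at bignum's magnitude:
    big = Decimal("1e30")
    small = Decimal("0.5")
    print((big + small) - big)       # 0E+3, i.e. zero: the 0.5 was absorbed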
This blog post by Mark Dominus is severely misguided (to say the least). It speaks of floating point numbers as if they were inevitably imprecise and carried a random, impossible-to-know error. This is not the case. Floating point numbers are exact rational numbers, and their operations are completely deterministic and quite sound (when the result of an operation between floating point numbers can be exact, it is; otherwise, the error of the result is as small as possible, by definition). There is nothing “incomprehensible” in the way floating point numbers work. As he admits at the beginning of his discussion, he does not really understand floating point numbers or their purpose, so he prefers to avoid them. While this is a legitimate position, I do not find much value in following him or promoting his obscurantist proposals.
To be fair, he’s writing in the context of designing software for financial applications, which is what’s under discussion here too. And all that’s suggested is to use integers for currency, instead of FP or decimals.
Actuarial models. I worked on one. Sadly, it used decimals instead of doubles (not my decision), which made the whole thing quite slow. Fluctuations of a couple percent were very acceptable in that model, so there was absolutely no need to use decimals.
Yep. And not just actuarial models. Many financial valuation models are intended as approximations, and so in these cases decimals are unnecessary. And when many numbers are crunched, which happens in certain kinds of calculations, you can see a slowdown if you are using decimals.
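The slowdown is easy to measure (a rough micro-benchmark sketch; the exact ratio depends on the runtime and the workload):

    import timeit

    # Sum a million values both ways; Decimal arithmetic in CPython is
    # typically several times slower than hardware floats.
    float_time = timeit.timeit("sum(xs)", setup="xs = [0.1] * 1_000_000", number=10)
    dec_time = timeit.timeit(
        "sum(xs)",
        setup="from decimal import Decimal; xs = [Decimal('0.1')] * 1_000_000",
        number=10,
    )
    print(f"float: {float_time:.3f}s  decimal: {dec_time:.3f}s")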
Pulling a Superman III?
I give this to the junior devs as antidote: http://beza1e1.tuxen.de/no_real_numbers.html
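The standard demonstration that goes with that antidote (Python shown here, but the same holds for IEEE 754 doubles in any language):

    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004

    # The float literal 0.1 is really the nearest binary fraction:
    from decimal import Decimal
    print(Decimal(0.1))       # 0.1000000000000000055511151231257827021181583404541015625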
Can does not equate to should. As in most things, there are trade-offs, and if you weigh accuracy, consistency, and speed and come up with floats fitting your use case, then good for you. Personally, I am not a fan of any kind of number whose operations aren’t always associative and whose equality is “problematic”, just because I don’t want to have that in my head at the same time that I’m worried about other domain problems.
Decimal math libraries exist for a reason[0].
0: http://speleotrove.com/decimal/
If you are willing to accept inaccurate, not-quite-correct rounding of currency, surely counting an integer number of pennies is strictly better than a float number of dollars?
Of course, since there are various standards for rounding money, you should probably use a localized money library.
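One concrete advantage of integer minor units (a small sketch; split_evenly is a hypothetical helper, not from any particular money library): allocations stay exact, so no cent is ever created or lost.

    def split_evenly(total_cents: int, parts: int) -> list[int]:
        """Split an amount so shares differ by at most one cent
        and always sum back to the exact total."""
        base, remainder = divmod(total_cents, parts)
        return [base + 1] * remainder + [base] * (parts - remainder)

    print(split_evenly(1000, 3))                # [334, 333, 333]
    print(sum(split_evenly(1000, 3)) == 1000)   # True: nothing lost to rounding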