For anybody expecting this to be the usual “BeCaUsE It’s fLoAtInG PoInT!” explanation, the article goes much much deeper:
Since 2008, there has been an IEEE 754 standard for decimal floating point values, which fixes this.
The fundamental problem illustrated here is that we are still using binary floating point values to represent (i.e. approximate) decimal values.
Yeah, and Python, Julia’s language of choice, has about the world’s only easily accessible implementation of IEEE 754 decimals. Little known fact: Python’s Decimal class is IEEE 754-compliant arithmetic!

I was flabbergasted to learn that Julia is not Julia’s language of choice.
Cool! I didn’t realize that :)
Ecstasy’s decimal types are all built around the IEEE 754 spec as well. Not 100% implemented at this point, though.
Is this expected?
If you want to create a literal Decimal, pass a string:
When you pass a float, you’re losing information before you do any arithmetic:
The problem is that 0.1 is not one tenth; it’s some other number very close to it. Whereas if you create a Decimal from a string, the Decimal constructor can see the actual digits and represent it correctly:
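For illustration, here is what that looks like in a Python session (a minimal sketch using the standard decimal module; the digits shown are the well-known exact value of the double nearest to 0.1):

    from decimal import Decimal

    # Constructing from a float captures the float's exact binary value,
    # which is not one tenth:
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # Constructing from a string captures the digits you actually wrote:
    print(Decimal("0.1"))
    # 0.1

    # The practical difference:
    print(0.1 + 0.2 == 0.3)                                   # False
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True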
I mean, it solves this only loosely. The places where decimal vs. non-decimal matters - certainly where this seems to come up - are generally places where I would question the use of floating point vs. fixed point (of any or arbitrary precision) in the first place.
Base 10 only resolves the multiples of 1/10 that binary can’t represent, but it still can’t represent 1/3, so it seems like base 30 would be better, as it can also accurately represent 1/3 and 1/6 in addition to 1/2, 1/5, and 1/10. Supporting this non-binary format necessarily results in slower operations.
Interestingly, to avoid a ~20% reduction in precision, decimal IEEE 754 actually works in base 1000 (groups of three decimal digits).
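Both points are easy to poke at in Python. The expand helper below is made up purely for illustration (it asks whether a fraction terminates in a given base), and the packing arithmetic is a generic back-of-envelope rather than the spec's own accounting:

    import math
    from fractions import Fraction

    # Fractional digits of frac in the given base, or None if the expansion
    # does not terminate within max_digits.
    def expand(frac, base, max_digits=12):
        digits = []
        for _ in range(max_digits):
            frac *= base
            digit = int(frac)       # the next digit is the integer part
            digits.append(digit)
            frac -= digit
            if frac == 0:
                return digits
        return None

    print(expand(Fraction(1, 3), 2))     # None: non-terminating in binary
    print(expand(Fraction(1, 3), 10))    # None: non-terminating in decimal
    print(expand(Fraction(1, 3), 30))    # [10]: 1/3 is exactly ten thirtieths
    print(expand(Fraction(1, 10), 30))   # [3]:  1/10 is exactly 3/30

    # Why grouping digits in threes (base 1000) is denser than digit-at-a-time:
    print(1 - math.log2(10) / 4)   # ~0.17 of a 4-bit BCD nibble is wasted
    print(1 - 1000 / 2 ** 10)      # ~0.023 of a 10-bit declet goes unused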
“Base 10 only resolves the multiples of 1/10 that binary can’t represent”
That is quite convenient, since humans almost always work in decimals.
I have yet to see a currency that is not expressed in the decimal system.
I have yet to see an order form that does not take its quantities in the decimal system.
In fact, if there’s any type that we do not need, it’s binary floating point, i.e. what programmers strangely call “float” and “double”.
Yes, which is my point: there are lots of systems for which base 10 is good for humans, but for which floating point in any base is inappropriate.
Every use case for floating point requires speed and accuracy. Every decimal floating point format is significantly more expensive to implement in hardware area, and is necessarily slower than binary floating point. The best case we have for accuracy is IEEE 754’s packed decimal (or compressed? I can’t recall exactly), which takes a 2.3% hit to precision, but it is even slower than the basic decimal form, which takes a 20% precision hit.
For real applications, the results of the operations being performed typically cannot be represented exactly in base 10 (or 1000) or base 2, so the belief that base 10 is “better” is erroneous. It is only in a very small set of cases, where a result would be exactly representable in base 10, that this comes up. If the desire is simply “be correct according to my intuition”, then a much better format would be base 30, which can also represent 1/(3^n) correctly. But the reality is that the average precision is necessarily lower than base 2 for every non-power-of-2 base, and the performance will be slower.

Floating point is intended for scientific and similar operations, which means it needs to be as fast as possible, with as much precision as possible.
Places where human decimal behaviour is important are almost universally places where floating point is wrong: people don’t want their bank or order systems doing maths that says x+y==x when y is not zero, which is what floating point does. That’s because people are dealing with quantities that generally have a minimum fractional quantity. Once you recognize that, your number format should become an integer count of that minimum quantity.
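Both halves of that claim are easy to demonstrate in Python (the dollar amounts are made up for illustration):

    # x + y == x with y != 0: a 64-bit float cannot resolve 1.0 at this magnitude.
    x = 1e16
    print(x + 1.0 == x)   # True

    # The alternative: an integer count of the minimum fractional quantity (cents).
    price_cents = 1999              # $19.99, held exactly
    total_cents = price_cents * 3   # 5997, no rounding surprises
    print(f"${total_cents // 100}.{total_cents % 100:02d}")   # $59.97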
For currencies, you can just use integers; floats are not meant for that anyway. Binary is the most efficient to evaluate on a computer.
Yes, for currencies, you can use integers. Who would want to say x * 1.05 when they could say multFixPtDec(x, 105, 2);
To some extent, this is why we use standards like IEEE 754. Some of us remember the bad old days, when every CPU had a different way of dealing with things. 80-bit floats, for example. Packed and unpacked decimal types on x86, for example. Yay, let’s have every application solve this in its own unique way!
Or maybe instead, let’s just use the standard IEEE 754 type that was purpose-built to hold decimal values without shitting itself 🤷♂️
[minor edit: I just saw both my wall of text replies were to u/cpurdy which I didn’t notice. This isn’t meant to have been a series of “target cpurdy” comments]
I mean, sure, if you have a piss-poor language that doesn’t let you define a currency quantity it will be annoying. That sounds like a poor language choice if you’re writing something that is intended to handle money, but more importantly, using floating point for currency is going to cause much bigger problems.

And this has nothing to do with IEEE 754; that is merely a specific standard detailing how the storage bits for the format work. The issue is fundamental to any floating point format: floating point is not appropriate for anything where users are expecting exact quantities to be maintained (currencies, order quantities, etc.), and it will bite you.
So as a heads up, assuming you’re complaining about x87’s 80-bit floats: those are IEEE 754 floating point, and are the reason IEEE 754 exists; every other manufacturer said IEEE 754 could not be implemented efficiently, until Intel went and produced it. The only issue is that, having been created before the IEEE 754 specification was finalized, it uses an explicit leading 1 bit, which turned out to be a mistake.
You’ll be pleased to know IEEE 754’s decimal variant has packed and unpacked decimal formats: unpacked taking a 20% precision hit but being implementable in software without being catastrophically slow, and packed having only a 2.3% precision hit but being pretty much hardware-only (though, to be clear, as I’ve said elsewhere, still significantly and necessarily slower than binary floating point).
If you are hell-bent on using an inappropriate format for your data then maybe decimal is better, but you went wrong when you started using a floating point representation for values that don’t have significant dynamic range and for which gaining or losing value due to precision limits is not acceptable.
No worries. I’m not feeling targeted.
C. C++. Java. JavaScript.
Right there we have 95% of the applications in the world. 🤷♂️
How about newer languages with no decimal support? Hmm … Go. Rust.
Other than that it actually specifies a standard binary format, operations, and defined behaviors thereof for decimal numbers.
Yes, there are special carve-outs (e.g. defining “extended precision format”) in IEEE754 to allow 8087 80-bit floats to be legal. That’s not surprising, since Intel was significantly involved in writing the IEEE754 spec.
I’ve implemented IEEE754 decimal with both declet and binary encoding in the past. Both formats have the same ranges, so there is no “precision hit” or “precision difference”. I’m not sure what you mean by packed vs unpacked; that seems to be a reference to the ancient 8086 instruction set, which supported both packed (nibble) and unpacked (byte) decimal arithmetic. (I used both, in x86 assembly, but probably not in the last 30 years.)
I really do not understand this. It is true that IEEE754 floating point is very good for large dynamic ranges, but that does not mean that it should only be used for values with a large dynamic range. In fact, quite often IEEE754 is used to deal with values limited between zero and one 🤷♂️
C++:
You can also do similar in Rust. I did not say “has a built-in currency type”.

You can also add one to Python, or a variety of other languages. I’m only partially surprised that Java still doesn’t provide support for operator overloading.
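As a rough sketch of what “adding one to Python” could look like (the Money class, its half-up rounding, and the string-factor trick are all hypothetical choices made for illustration):

    from fractions import Fraction

    class Money:
        # A fixed-point currency amount: an integer count of cents.
        def __init__(self, cents: int):
            self.cents = cents

        def __mul__(self, factor) -> "Money":
            # Scale exactly with Fraction, then round half-up to a whole cent
            # (correct for non-negative amounts).
            exact = Fraction(self.cents) * Fraction(str(factor))
            return Money(int(exact + Fraction(1, 2)))

        def __str__(self) -> str:
            return f"${self.cents // 100}.{self.cents % 100:02d}"

    print(Money(1999) * "1.05")   # $20.99
    print(Money(1999) * 3)        # $59.97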
No. It defines the operations on floating point numbers, which is a specific numeric structure, and, as I said, one that is inappropriate for the common cases where people are super concerned about handling 1/(10^n) accurately.
I had to go back and re-read the spec; I misunderstood the two significand encodings. Derp. I assumed your reference to packed and unpacked was to those.
On the plus side, this means that you’re only throwing out 2% of precision for both forms.
No, I mean that the kinds of things people care about, where they need accurate representation of multiples of 1/(10^n), do not have dynamic range; fixed-point (or plain integers) is the correct representation. So a decimal format optimizes floating point for fixed-point data, instead of for the actual use cases that have widely varying ranges (scientific computation, graphics, etc.).
There is a huge dynamic range between 0 and 1. The entire point of floating point is that every number is represented as a value in [1..base) scaled by an exponent, which is what provides the dynamic range. The point I am making is that the examples where decimal formats are valuable do not need that at all.
What is the multiplication supposed to represent? Are you adding a 5% fee? You need to round the value anyway; the customer isn’t going to give you 3.1395 dollars. And what if the fee was 1/6 of the price? Decimals aren’t going to help you there.
It never ceases to amaze me how many people really work hard to avoid obvious, documented, standardized solutions to problems when random roll-your-own solutions can be tediously written, incrementally-debugged, and forever-maintained instead.
Help me understand why writing your own decimal support is superior to just using the standard decimal types?
I’m going to go out on a limb here and guess that you don’t write your own “int”, “float”, and “double”. Why is decimal any different?
This whole conversation seems insane to me. But I recognize that maybe I’m the one who is insane, so please explain it to me.
No, I’m saying that you don’t need a decimal type at all. If you need to represent an integral value, use an integer. If you want to represent an approximation of a real number, use a float. What else would you want to represent?
I would like to have a value that is a decimal value. I am not the only developer who has needed to do this. I have needed it many times in financial services applications. I have needed it many times in ecommerce applications. I have needed it many times in non-financial business applications. This really is not a crazy or rare requirement. Again, why would you want to use a type that provides an approximation of the desired value, when you could just use a type that actually holds the desired value? I’m not talking crazy, am I?
What do you mean by “a decimal value”? That’s not an established mathematical term. If you mean any number that can be expressed as m/10ⁿ for some integers m, n, you need to explain precisely why you’d want to use that in a real application. If you mean any number that can be expressed as m/10ⁿ for some integer m and a fixed integer n, why not just use an integer?
My proposal is that we switch to a base 30 floating point format, and that could handle a 1/6th fee :D :D :D
You’re almost there. https://en.wikipedia.org/wiki/Sexagesimal
Being able to say x * 1.05 isn’t a property of the type itself, it’s just language support. If your language supports operator overloading you could use that syntax for fixed point too.

Oh, you are using a language with fixed point literals? I have (in the past). I know that C#/VB.NET has its 128-bit non-standard floating point decimal type, so you’re not talking about that. Python has some sort of fixed point decimal support (and also floating point decimal). What language are you referring to?
You don’t need to. Strings are a good substitute.

For Kotlin, it doesn’t really even matter what the left operand is:
https://pl.kotl.in/7FDdqQdSo
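The idea, as a placeholder sketch (FixedDecimal does no real math here; the point is only the operator-overloading ergonomics):

    fun main() {
        println("1.05" * 3)
    }

    // Extension operator: "multiply" a decimal string literal by an Int.
    operator fun String.times(right_operand: Int): FixedDecimal {
        // Do math here; this placeholder just returns an empty value.
        return FixedDecimal()
    }

    class FixedDecimal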
So your idea is to write your own custom decimal type? And that is somehow better than using a well-established international standard, IEEE 754?
I think Kotlin is a nice language, and it’s cool that it allows you to write new classes, but being forced to build your own basic data types (“hey look ma! I invented a character string!”) seems a little crazy to me 🤷♂️
The idea is that the type represents an underlying standard as well as its defined operations. You don’t need native support for a standard in order to support said standard.
Edit:
I was giving an example about ergonomics and language support, rather than about using an opaque dependency.
Above we did the calculation in decimal, because that’s a little more intuitive to read.

Hard disagree. The biggest problem people have with floats is that it’s binary arithmetic displayed in decimal. Most people aren’t even actively aware that the arithmetic is in binary.
Julia needed 57 decimal digits to show the exact value of her floats (although she tried to display 80, more than needed). In reality, they only have 52 bits in the significand (plus one leading implicit bit that is always lit, except for 0.0f, of course). She needed more space and more different kinds of symbols to exactly represent what can be represented with 52 zeros and ones.

Plus, adding in binary is a lot easier than in decimal! It’s just XOR with carry. It also makes it easier to see what to round to! Always round to zero, because that’s the only even one.
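To see the same effect on a concrete value (0.1 here, not the numbers from the article, so the digit count differs), Python can print a double exactly in both decimal and hexadecimal:

    from decimal import Decimal

    # The double nearest to 0.1, written out exactly in decimal: 55 digits.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # The same value in hexadecimal floating point is far more compact.
    print((0.1).hex())
    # 0x1.999999999999ap-4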