This article is amusing. Given sufficient information, yes, these ~~guidelines~~ myths are incorrect.

1. They are not exact

Article claims False because they are exact within their capabilities. A sufficiently knowledgeable person can avoid the sharp edges like 0.1 and write safe code.
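A quick sketch of the "exact within their capabilities" point (Python floats are IEEE 754 doubles): 0.1 has no finite binary representation, but dyadic rationals like 0.5 and 0.25, and integers up to 2**53, are stored exactly.

```python
# 0.1 is not representable in binary floating point, so arithmetic
# on it carries representation error:
print(0.1 + 0.2 == 0.3)        # False
print(0.1 + 0.2)               # 0.30000000000000004

# Dyadic rationals (sums of powers of two) are exact:
print(0.5 + 0.25 == 0.75)      # True

# Integers are exact up to 2**53; beyond that, the +1 is lost:
print(2.0**53 + 1.0 == 2.0**53)  # True
```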

Or just treat them as not exact.

2. They are non-deterministic

Article claims False. But you need to know the implementation details of your computation, including the device design, so you can treat them as deterministic.

That was part of your perfect knowledge implant, right?

3. NaN and INF are an indication of an error

Article claims False. You expect them as safeguards for your perfect knowledge.

All these points are the same: “It’s not true that [X]!” and then it continues to say “It is true that [X] under condition [Y]”. Ehm…

Some decent enough content nonetheless, and I think especially some junior programmers could benefit, as “floats are not exact” is lacking in nuance. Just a shame about the contrarian, clickbaity style. Don’t know who flagged it as spam, because it isn’t really, IMHO.

They are 100% deterministic, IEEE specifies exact behaviour, and that does not vary between implementations. If you go into undefined behaviour (e.g. large values in the transcendentals) then your results may not be great, but the same is true of integers.

3. NaN and INF are an indication of an error

What? You’re doing arithmetic; I assume you know what arithmetic does? You don’t get NaN except where arithmetic makes no sense (so arguably an error), but those errors are the same ones you get doing the maths on paper. Similarly, the places you get +/- infinity are 90% the same as in real math. The occasions where that isn’t true are overflow, and that happens with integers as well.
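A minimal sketch of where infinity and NaN actually arise (Python floats are IEEE 754 doubles; note Python raises on float division by zero rather than returning the IEEE result):

```python
import math

# Infinity appears where real analysis says "grows without bound":
print(1e308 * 10)            # inf (overflow)
print(math.inf > 1e308)      # True: usable as a limiting value

# NaN appears only where the operation has no sensible answer,
# the same places it would on paper:
print(math.inf - math.inf)   # nan

# NaNs propagate through subsequent arithmetic:
print(float("nan") + 1.0)    # nan
```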

They are 100% deterministic, IEEE specifies exact behaviour

This is 100% true if your compiler and architecture are 100% IEEE compliant. If anything has enabled things like -ffast-math, or if you are targeting 32-bit x86 and not SSE (for example), then you may find that some steps of a calculation are not actually IEEE compliant. -ffast-math enables things like fused multiply-add operations, which are individually deterministic but will give different precision to separate multiply and add operations. Most of the math.h functions have different implementations on different platforms with varying performance/precision trade-offs, so just because the primitive operations are well-specified doesn’t mean that more complex things are.
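The reordering issue is visible even without any compiler flag, because FP addition is not associative. A hypothetical Python sketch of the effect a C compiler exposes when it re-associates `a + b + c`:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one way a compiler might associate a + b + c
right = a + (b + c)  # another way, e.g. after vectorization

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False: same inputs, different grouping, different bits
```

Both results are deterministic on their own; the nondeterminism people observe comes from not controlling which grouping the compiler picks.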

I disagree. It’s just different levels of knowledge. This article is addressing the overcaution that can come with limited knowledge — like the child who’s learned “don’t touch knives: they’ll cut you!”, and now needs to learn that this is … not exactly a myth, but something you need to partly unlearn to make use of knives.

The point about exactness is very important and helps you use floats optimally. JavaScript is a great example: it makes the language and runtime simpler not to have to use both int and float representations for numbers.

The point about INF is a good one in some cases where you can use INF as a limiting value, like when finding a max/min. The fact that NaNs propagate is also valuable.
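A minimal sketch of the INF-as-limiting-value idiom (`running_min` is a name made up for illustration):

```python
import math

def running_min(xs):
    # math.inf is a clean identity element for min: every finite
    # value compares smaller, so no "is this the first item?" flag.
    best = math.inf
    for x in xs:
        if x < best:
            best = x
    return best

print(running_min([3.5, -2.0, 7.1]))  # -2.0
print(running_min([]))                # inf: empty input stays distinguishable
```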

The one about nondeterminism is less useful. Maybe it doesn’t belong here. I agree it’s good to assume that FP computations can in general vary in their LSBs for weird and unpredictable reasons.

There are two interpretations of floating point numbers:

They are inexact quantities which we perform exact operations on.

They are exact quantities which we perform inexact operations on.

The jury is still out on which of these is the best way to think about the problem, and I expect that it is domain-specific. To say ‘just treat them as not exact’ is overly simplistic; we have a choice of whether to consider our operations or values as inexact.

I believe 2 is wrong in the article; IIRC for some operations, the IEEE standard allows for the hardware to choose to do a fast-math optimization (or not) depending on runtime things like thread contention, the need to cache values, stashing registers into the stack, etc, and there are (were?) architectures that had a runtime nondeterminism in the result, even on the same hardware/compiler combo.

No, ieee754 allows some functions to be inaccurate because correct rounding may not be possible. In principle this only impacts the transcendental functions, and the most widespread hardware implementation is x87, which turns out to be inaccurate outside of some range (I can’t recall what), and unintentionally inaccurate within the range. It doesn’t matter though, as everyone uses software implementations since they work with floats and doubles (x87 is ieee754 but uses an 80-bit format, so double rounding can happen), so much so that modern hardware doesn’t support them natively.

Remember, Intel had a rep on the 754 committee and very likely got some sketchy stuff rolled into the spec to be sure that x87 was compliant.
About midway through someone explains that the nondeterminism is allowed by the standard.

Anyways, even though it’s not “modern” there are plenty of things out there that are still using x87 so it may be important for someone to keep this in mind.

Intel doesn’t deserve to be dissed here, it was intel that made ieee754 happen by demonstrating that you could actually implement it efficiently.

The price they paid for that was that they predated the implicit leading 1, meaning they lost a bit of precision in fp80, as well as suffering from pseudo-denormals, pseudo-infinities, and pseudo-NaNs, as well as unnormals. They also ended up with fprem (remainder) and fprem1 (remainder according to ieee754).

The reason no one uses x87 is because it’s dog slow, not because it’s inaccurate. There’s a huge market for low-accuracy high-bandwidth floating point operations.

correct rounding may not be possible […] transcendental functions

Isn’t this just a question of choosing branch cuts?

There are many reasons not to use x87. So many reasons. Scientific computing loves it though, as for them the extra precision matters (at least in a cargo-cult fashion). Fun story: because win64 defines long double == double, SPECfp isn’t valid for comparisons between x86_64 systems, due to the use of long double in some tests. It also invalidates comparisons to ARM (again, no 80-bit arithmetic).

Some transcendental functions round differently at the same precision depending on how far out you evaluate. There are other things that impact values too, like what polynomials you use in the approximation, etc.

The second point is actually even more dire. IEEE 754 allows exp(), cos(), and other transcendental functions to be incorrectly rounded because they sometimes cannot be correctly rounded; IEEE 754 author Kahan calls this the “table-maker’s dilemma”.
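One way to see what “correctly rounded” would even mean here: recompute the same function at much higher precision and compare. A sketch using the stdlib Decimal, whose exp is correctly rounded within its own (here 50-digit) precision:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # far more digits than a double can hold

x = 1.0
hi = Decimal(x).exp()   # 50-digit reference value for e**x
lo = math.exp(x)        # whatever this platform's libm returns

# The libm result should be within about an ulp of the reference;
# whether it is within *half* an ulp (correctly rounded) is exactly
# what IEEE 754 declines to require for transcendentals.
print(hi)
print(lo)
print(abs(Decimal(lo) - hi) <= Decimal(math.ulp(lo)))
```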

Dumb questions: Why is it called the table maker’s dilemma? What does “table” mean in this context? What does “how much computation it would cost” mean?


You don’t need fma to distrust the compiler: a + b + c can produce different results depending on how the compiler chooses to add.


Yes, it was an x87 problem:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



Lookup tables; it’s a tradeoff between speed, computation, and memory.

Picolisp fixed-point numbers.