Yeah, the conclusion of the article is that yes, you can trust floating point on Apple Silicon. Specifically, you can trust it to implement IEEE floating point arithmetic, which is exactly what you would expect.

I… feel like it would be much more useful to take the M1 through an IEEE floating point test suite?

I mean, great, we now know that the M1 and Intel get the same results on three arbitrarily chosen tiny tests. The article doesn’t say whether the results are correct according to IEEE or not, but I would guess that they are. That doesn’t really tell us much.

Also, is anyone actually expecting Apple to have released a CPU with a faulty FPU which doesn’t implement IEEE floating point correctly?

Or does the whole article just stem from a misunderstanding of floating point? I kind of get the feeling that this is written by someone who’s just kind of scared of floating point; someone who just knows that floating point calculations have “errors” without knowing that there’s actually a standard which specifies exactly what an FPU should do.

it would be much more useful to take the M1 through an IEEE floating point test suite?

Agreed.

but I would guess that they are

They are. They are somewhat well-known examples of how naively implementing certain algorithms gives you results that don’t converge at all to the known analytical solutions, even though you would naively expect them to.
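A minimal sketch of that kind of trap (my own example, not necessarily one of the article’s tests): the textbook limit (1 + 1/n)^n → e looks like it should get more accurate as n grows, but in binary64 it eventually collapses, because once 1/n drops below half an ulp of 1.0, the addition 1 + 1/n rounds to exactly 1.0.

```python
import math

# (1 + 1/n)**n should converge to e ≈ 2.71828... as n grows,
# but the computed sequence stops converging in binary64.
moderate = (1 + 1 / 10**8) ** 10**8      # still close to e
huge     = (1 + 1 / 10**16) ** 10**16    # 1/10**16 < ulp(1.0)/2, so 1 + 1/n == 1.0

print(moderate)   # ≈ 2.718281...
print(huge)       # exactly 1.0 — nowhere near e

assert abs(moderate - math.e) < 1e-6
assert huge == 1.0
```

Both results are exactly what IEEE 754 prescribes; it is the naive expectation of convergence that is wrong.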

is anyone actually expecting Apple to have released a CPU with a faulty FPU

Well, there was the Intel FDIV bug, after all? And there are microcode bugs and so forth. So it’s not out of the realm of possibility, TBH.

Or does the whole article just stem from a misunderstanding of floating point?

I don’t think someone who doesn’t understand floating point would reference a “Handbook of Floating-Point Arithmetic”. I do agree it’s not a very good article, though.

Floating point test suites are annoyingly hard to find. For some operations you can brute-force all 32-bit inputs, but for accurate coverage there are unending edge cases.
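A few of the edge cases such a suite has to pin down, sketched here in binary64 rather than 32-bit (the same ideas apply):

```python
import math

# Signed zero: compares equal to +0.0, but the sign is observable.
assert 0.0 == -0.0
assert math.copysign(1.0, -0.0) == -1.0

# NaN never compares equal, not even to itself.
nan = float("nan")
assert nan != nan

# Subnormals: 5e-324 is the smallest positive binary64 value;
# halving it ties between 0 and 5e-324 and rounds to even (0.0).
assert 5e-324 > 0.0
assert 5e-324 / 2 == 0.0

# Infinity absorbs any finite addition.
assert float("inf") + 1.0 == float("inf")
```

Every one of these behaviours is mandated by IEEE 754, and every one has historically been gotten wrong by some implementation somewhere.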

I fail to understand the point of this article. Aren’t both x86 and ARM floating point just implementations of IEEE 754, which should therefore produce the same values? Or am I missing something? I see mort and olliej raised the same point.

So… apparently, you can trust that Apple Silicon is bug-compatible with Intel processors? Or is this a case where IEEE requires “incorrect” results for floating-point math?

I’m pretty sure these results are expected based on the floating point standard. They’re mathematically incorrect because of limitations of the standard and the fact that you’re effectively rounding on each step.
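The “rounding on each step” part is easy to demonstrate: each IEEE 754 addition is correctly rounded in isolation, which makes the operation non-associative, so the grouping of a computation changes the result.

```python
# Each individual + is correctly rounded, but the rounding error
# depends on the intermediate values, so grouping matters.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a)   # 0.6000000000000001
print(b)   # 0.6
assert a != b
```

Neither result is a bug; both are the exact correctly rounded outcomes for their respective sequences of operations.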

Still pretty annoying how “incorrect results” gets thrown around without any explanation or qualification. It feels more like click-bait for (Apple) commenters to dunk on Intel … which is exactly what seems to be happening in the comments already:

this somehow feels like a deliberate decision on Apple’s part. I mean replicating Intel’s errors on a very different architecture.

Well, ‘mathematically incorrect’ again suggests these results are incorrect or that what happens is somehow not ‘mathematical’. I think such terminology muddies the waters. These results are not in the least incorrect. They are exactly what a calculation using mathematically well-defined finite precision representations of numbers, as specified in IEEE754, should result in.

What’s incorrect is the expectation that such algorithms should converge to the analytical solution.
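That determinism is checkable: a conforming binary64 implementation must produce bit-for-bit identical results for the basic operations, so famous “errors” like 0.1 + 0.2 come out the same on every machine (a quick illustration, not one of the article’s tests):

```python
# 0.1 + 0.2 is not 0.3, but it is the *same* non-0.3 value on every
# conforming IEEE 754 binary64 implementation: the correctly rounded
# sum of the doubles nearest to 0.1 and to 0.2.
s = 0.1 + 0.2
print(s)        # 0.30000000000000004
print(s.hex())  # the exact bit pattern, as a hex float literal
assert s == 0.30000000000000004
assert s != 0.3
```

If an M1 and an Intel chip disagreed on this, one of them would be non-conforming; agreement is the specified behaviour, not a coincidence.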

Please consult Kahan on the original context of these traps. Kahan’s claim is that we must do error analysis if we want to understand the results that we get from IEEE 754 algorithms.
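One concrete flavour of that error analysis is Kahan’s own compensated summation, which estimates the rounding error of each addition and feeds it back into the next step. A sketch (my example, not the article’s):

```python
import math

def kahan_sum(xs):
    """Kahan compensated summation: carry the rounding error of each
    addition in c and correct the next term with it."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y  # algebraically zero; numerically the rounding error
        s = t
    return s

data = [0.1] * 10**6
naive = sum(data)
compensated = kahan_sum(data)
exact = math.fsum(data)  # correctly rounded reference sum

assert naive != 100000.0                              # naive summation drifts
assert abs(compensated - exact) <= abs(naive - exact) # compensation helps
```

The point is Kahan’s: only by analysing where the rounding error goes can you say whether a result like `naive` is “wrong” or simply as good as the algorithm deserves.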

This article looked into whether R would potentially break due to FPU incompatibility between ARM and x86. The answer is that it will probably be fine. ARM has a special FPU mode (flush-to-zero) that introduces some incompatibilities, but it is turned off in macOS. On the other hand, x86 also has its own special FPU features, which can be turned off too.
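A cheap runtime probe for whether the environment you’re running in flushes subnormals to zero (my own check, not from the article): under IEEE gradual underflow, halving the smallest normal double yields a positive subnormal rather than zero.

```python
import sys

# With gradual underflow (flush-to-zero off), halving the smallest
# normal double gives a subnormal. A flush-to-zero environment
# would produce 0.0 here instead.
smallest_normal = sys.float_info.min
half = smallest_normal / 2
print(half)  # a positive subnormal under IEEE semantics
assert half > 0.0
assert half < smallest_normal
```

On a stock macOS or Linux process this passes, consistent with the non-IEEE modes being disabled by default.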

No.

Any representation using a finite number of bits will have subtle edge cases and rounding problems. This doesn’t make it untrustworthy.

the conclusion of the article is that yes, you can trust floating point on Apple Silicon

Thank you for defusing the clickbait headline.

Bah. Y’all know what I mean.

[Comment removed by author]

I fail to understand the point of this article.

Yeah, this article is kind of like notifying the team that a unit test passed. “OK, thanks Norbert, we were kind of expecting that.”

Or is this a case where IEEE requires “incorrect” results for floating-point math?

I believe Apple (and other recent Arm) FPUs can generate three kinds of incorrect results:

This article went into whether R would potentially break due to FPU incompatibility between ARM and x86.

Does R use long double? If it does, then x86 is the only platform where long double is the 80-bit IEEE 754 double-extended format.
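Python’s float is binary64 everywhere, so you can’t observe this from pure Python values, but ctypes at least shows how much storage the platform ABI gives `long double` (a sketch; the per-platform sizes in the comments are assumptions about common ABIs, not from the article):

```python
import ctypes

# Storage the C ABI reserves for long double on this platform:
#   x86-64 Linux/macOS: 16 bytes, wrapping the 80-bit x87 format
#   32-bit x86:         12 bytes, same 80-bit format
#   Apple arm64:         8 bytes (long double == double)
#   aarch64 Linux:      16 bytes (IEEE binary128)
size = ctypes.sizeof(ctypes.c_longdouble)
print(size)
assert size in (8, 12, 16)
```

So code that silently leans on x87 extended precision can behave differently once recompiled for arm64, which is exactly the R concern above.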

Ah, the problem is Fortran. Happily, gfortran runs fine under Rosetta, and so does R, so people aren’t blocked for now.

The tl;dr is yes, as long as you understand that floating point doesn’t have infinite precision, and that IEEE 754 has specific required behaviour.