I know that floating-point arithmetic is a bit crazy on modern computers. For example, floating-point addition is not associative.
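A minimal sketch of this, assuming IEEE-754 doubles (e.g. amd64, or x86 with SSE math; see the note at the end about i386):

```c
#include <stdio.h>

int main(void) {
    double a = 0.1, b = 0.2, c = 0.3;

    /* With IEEE-754 doubles, the two groupings round differently:
       (a + b) + c  ->  0.60000000000000009
       a + (b + c)  ->  0.59999999999999998 */
    printf("(a + b) + c = %.17g\n", (a + b) + c);
    printf("a + (b + c) = %.17g\n", a + (b + c));
    printf("equal? %s\n", (a + b) + c == a + (b + c) ? "yes" : "no");
    return 0;
}
```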

Interestingly, integer arithmetic isn’t associative either: `(a + b) - c` might give different results than `a + (b - c)`. Specifically, for some values of `a`, `b`, and `c`, `a + b` might overflow, while `a + (b - c)` might not. (`a=INT_MAX, b=1, c=1` is a trivial set of values where the first expression is UB while the second is well-defined.)
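A sketch of that integer case (the UB expression is left commented out, since a conforming compiler is free to do anything with it):

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    int a = INT_MAX, b = 1, c = 1;

    /* Well-defined: b - c == 0, so this is just INT_MAX. */
    printf("a + (b - c) = %d\n", a + (b - c));

    /* Undefined behavior: a + b overflows signed int before the
       subtraction can bring the value back into range. */
    /* printf("(a + b) - c = %d\n", (a + b) - c); */

    return 0;
}
```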

Basically, computers are weird.

What’s the right answer?

Note: “x86” here does actually mean i386 (where the default flags can’t assume SSE), not amd64. gcc uses the stupid i386 x87 FPU in its default(?) 80-bit precision mode.
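A hedged illustration of why that matters; whether you actually see a difference depends on the gcc version, optimization level, and flags (`-m32 -mfpmath=387` vs `-msse2 -mfpmath=sse`):

```c
#include <stdio.h>

/* volatile stops the compiler from constant-folding the arithmetic */
volatile double a = 0.1, b = 0.2;

int main(void) {
    double x = a + b;   /* stored to memory: rounded to a 64-bit double */

    /* With x87 math (the i386 default), a + b below may be recomputed
       in an 80-bit register and compared against the 64-bit x, so this
       can print "different"; with SSE math it prints "same". */
    printf("%s\n", x == a + b ? "same" : "different");
    return 0;
}
```

gcc’s `-ffloat-store` (or just compiling with `-msse2 -mfpmath=sse`) forces results to be rounded to 64 bits and makes the comparison behave as expected.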