  2.

    I know that floating-point arithmetic is a bit crazy on modern computers. For example, floating-point addition is not associative.
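    A minimal C sketch of that non-associativity (the specific values are my own choice for illustration, not from the thread):

    ```c
    #include <stdio.h>

    int main(void) {
        double a = 0.1, b = 0.2, c = 0.3;
        /* With IEEE-754 doubles, the two groupings round differently:
           b + c happens to be exactly 0.5, but a + b is not exactly 0.3. */
        printf("(a + b) + c = %.17g\n", (a + b) + c);
        printf("a + (b + c) = %.17g\n", a + (b + c));
        printf("equal: %d\n", ((a + b) + c) == (a + (b + c)));  /* prints 0 */
        return 0;
    }
    ```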

    Interestingly, integer arithmetic isn’t associative either: (a + b) - c might give different results than a + (b - c). Specifically, for some values of a, b, and c, a + b might overflow, while a + (b - c) might not. (a=INT_MAX, b=1, c=1 is a trivial set of numbers where the first expression is UB while the second is well-defined.)

    Basically, computers are weird.
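    The integer case can be sketched the same way; assuming GCC or Clang, __builtin_add_overflow lets us detect the overflow instead of actually committing the UB:

    ```c
    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        int a = INT_MAX, b = 1, c = 1;
        int sum;
        /* (a + b) - c: the inner addition would overflow -- UB in C --
           so detect it rather than perform it. */
        bool overflowed = __builtin_add_overflow(a, b, &sum);
        printf("a + b overflows: %s\n", overflowed ? "yes" : "no"); /* yes */
        /* a + (b - c): b - c == 0, so this is well-defined. */
        printf("a + (b - c) = %d\n", a + (b - c)); /* 2147483647 */
        return 0;
    }
    ```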

    1.

      What’s the right answer?

      • Refuse to compile?
      • Warn due to the range of the operands?
      • Emulate the highest precision supported by any target, even on targets whose native precision is lower?
      • Dynamically choose a precision based on the run-time architecture? (How do you test that!?)
      • Surprise motherfucker?
      1.

        Note: “x86” here does actually mean i386 (where default flags can’t assume SSE), not amd64. gcc uses the stupid i386 FPU in its default (?) 80-bit mode.