1. 17
    1. 11

      Answer: no.

      1. 5

        Confirmed. It’s one of the things I know I can look up when I suspect I need it.

        1. 2

          The problem is not knowing that there is even something that needs to be looked up.

          Even basic things, from “no, not all base-10 numbers have an exact (finite) base-2 representation”, to “yes, signed zero is a thing, for very good reasons”, to “please don’t forget that Inf, -Inf, and NaN are valid (and useful) values”.
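
          A quick Python sketch of all three points (the printed values are what IEEE 754 doubles actually give):

              import math

              # 0.1 has no finite base-2 representation; the stored double is slightly off
              print(f"{0.1:.20f}")                  # 0.10000000000000000555
              print(0.1 + 0.2 == 0.3)               # False

              # signed zero is a distinct, useful value (e.g. it selects the branch of atan2)
              print(math.atan2(0.0, -1.0))          # 3.141592653589793
              print(math.atan2(-0.0, -1.0))         # -3.141592653589793

              # Inf, -Inf, and NaN are valid results, not errors
              print(1e308 * 10)                     # inf
              print(float("nan") == float("nan"))   # False: NaN never compares equal to itself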

      2. 4

        One of the design aims of IEEE floating-point was to reduce the likelihood that developers who did not know much about floating point would produce incorrect results.

        The survey asked a load of detailed questions that only the few people who have studied the subject in detail would be able to answer. Surprise! The results were little better than random.

        I’m sure I could come up with a survey to find out whether developers understood integer arithmetic, and get essentially random results.

      3. 2

        Darn. You beat me to posting exactly the same thing.

    2. 10

      Reminds me of the Tor developers advising against using floating point in their coding standards:

      Floating point math is hard

      Floating point arithmetic as typically implemented by computers is very counterintuitive. Failure to adequately analyze floating point usage can result in surprising behavior and even security vulnerabilities!

      General advice:

      • Don’t use floating point.
      • If you must use floating point, document how the limits of floating point precision and calculation accuracy affect function outputs.
      • Try to do as much as possible of your calculations using integers (possibly acting as fixed-point numbers) and convert to floating point for display.
      • If you must send floating point numbers on the wire, serialize them in a platform-independent way. Tor avoids exchanging floating-point values, but when it does, it uses ASCII numerals, with a decimal point (“.”).
      • Binary fractions behave very differently from decimal fractions. Make sure you understand how these differences affect your calculations.
      • Every floating point arithmetic operation is an opportunity to lose precision, overflow, underflow, or otherwise produce undesired results. Addition and subtraction tend to be worse than multiplication and division (due to things like catastrophic cancellation). Try to arrange your calculations to minimize such effects.
      • Changing the order of operations changes the results of many floating-point calculations. Be careful when you simplify calculations! If the order is significant, document it using a code comment.
      • Comparing most floating point values for equality is unreliable. Avoid using ==, instead, use >= or <=. If you use an epsilon value, make sure it’s appropriate for the ranges in question.
      • Different environments (including compiler flags and per-thread state on a single platform!) can get different results from the same floating point calculations. This means you can’t use floats in anything that needs to be deterministic, like consensus generation. This also makes reliable unit tests of floating-point outputs hard to write.

      https://gitweb.torproject.org/tor.git/tree/doc/HACKING/CodingStandards.md#n235
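
      To make the equality-comparison bullet concrete, here is a minimal Python sketch (mine, not from the Tor docs): a naive == fails, a fixed epsilon is only right for one magnitude range, and math.isclose applies a relative tolerance that scales with the operands:

          import math

          a = 0.1 + 0.2
          b = 0.3
          print(a == b)                             # False: the two sides round differently
          print(abs(a - b) < 1e-9)                  # True here, but 1e-9 is meaningless near 1e20
          print(math.isclose(a, b, rel_tol=1e-9))   # True: tolerance scales with magnitude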

      1. 1

        Where is FP called for in tor development?

        1. 2

          rephist.c, for example.

          1. 1

            Thanks.

    3. 3

      Gerald Sussman has a talk where he says nothing scares him more than floating point numbers. I’ve always avoided looking into the abyss.

    4. 3

      Recently, I rediscovered how easy it is to get (x - y) == x even for large values of y.
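
      This is absorption: once y falls below half the gap between consecutive doubles at x’s magnitude, x - y rounds straight back to x. A small Python sketch (math.ulp needs Python 3.9+):

          import math

          x = 1e20
          y = 1000.0                # large by everyday standards, tiny next to x
          print((x - y) == x)       # True
          print(math.ulp(x))        # 16384.0: the spacing of doubles near 1e20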

    5. 1

      Most floating point operations are easy to understand with an analogy to numbers in scientific notation (denoted with a fixed number of digits, for example 1.234E2 rather than 123.4). Then there are things like denormal numbers, which are weird and I’ll happily admit that it’s not worth the effort to understand.

      (Is there IEEE floating point behavior which cannot be adequately explained by the analogy with scientific notation?)
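
      For what it’s worth, denormals fit the analogy too, if you allow the leading digit to be zero: below the smallest normal exponent, precision is shed gradually instead of flushing to zero. A quick Python sketch:

          import sys

          print(sys.float_info.min)       # 2.2250738585072014e-308: smallest normal double
          print(sys.float_info.min / 2)   # 1.1125369292536007e-308: subnormal, but not zero
          print(5e-324)                   # 5e-324: the smallest positive subnormal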

      1. 2

        That addition and multiplication aren’t (necessarily) associative?

        1. 1

          That’s the same if you use scientific notation with a fixed number of digits: (1.00e10 - 1.00e10) + 1.00e0 = 1.00e0 but 1.00e10 - (1.00e10 - 1.00e0) = 0.

          In the second formula 1.00e10 - 1.00e0 = 9.999999999e9, but since we only keep 3 significant digits, it gets rounded to 1.00e10.
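
          The same shape of example in actual doubles, where the precision is 53 bits instead of 3 decimal digits:

              a, b = 1e16, 1.0
              print((a - a) + b)   # 1.0
              print(a - (a - b))   # 0.0: a - b rounds back up to a, as in the decimal example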

          1. 2

            But that works because you’re mixing addition and subtraction, right? Floating point isn’t (always) associative with just addition or multiplication.

            1. 1

              I meant to give an example where arithmetic in scientific notation isn’t associative, to show that scientific notation with fixed precision is mostly analogous to floating point. For clarity, I should have used a + (-b) instead of a - b… Does it make more sense with this explanation?

              BTW, I think floating point multiplication and addition are in fact commutative.
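
              They are: IEEE addition and multiplication are commutative (NaN payloads aside), just not associative. And mixing in subtraction isn’t needed to lose associativity; a pure-addition sketch in Python:

                  a, b, c = 1e16, 1.0, 1.0
                  print((a + b) + c)   # 1e+16: each lone 1.0 is absorbed
                  print(a + (b + c))   # 1.0000000000000002e+16: the combined 2.0 survives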

      2. 1

        NaN, infinity, and negative zero. But since IEEE floats are a particular encoding of binary exponential notation, the analogy is quite close. However, I think you’ll find it’s not actually very useful: people will make all the same mistakes working with limited-precision decimal exponential notation that they make with floats, since it turns out people very rarely do significant manipulation of numbers in exponential notation.
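
        A short Python sketch of where those three escape the analogy:

            import math

            print(math.inf - math.inf)                      # nan: inf arithmetic is only partially defined
            print(-0.0 == 0.0, str(-0.0))                   # True -0.0: equal, yet distinguishable values
            print(min(math.nan, 1.0), min(1.0, math.nan))   # nan 1.0: NaN makes ordering order-dependent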

        1. 4

          To me, it’s useful in the sense that I can easily understand why addition is not always associative. The floating point representation is suddenly not that mysterious anymore.