I don’t write Go, but I found this interesting, because several of the same hacks are common in C (which I do write). I have always known/assumed that things like NaN and denormals had weirdness going on and so would not be at all surprised to find that they weren’t handled correctly. But I am embarrassed to report that I have gone several decades without, until now, realizing that the floor(x + 0.5) trick gives the wrong answer when x is less than, but very close to, 0.5. On the other hand, this solution was suggested by Russ Cox in one of the linked threads, so I’m at least in good company.

The reason is obvious in retrospect. Floating point numbers have, as the name implies, a point that floats. Specifically, the closer you are to 0, the more bits are given to the fractional (“after the point”) portion of the number. Therefore, if you take the largest N-bit floating-point number that is less than but still distinguishable from 0.5 (which should be rounded to 0), and you add 0.5 to it, you get a number that isn’t distinguishable from 1.0 (and therefore is rounded, incorrectly, to 1).

Obvious in retrospect, but I think I learned this rounding trick from some C gurus when I was like 16, and never questioned it, so it is surprising!

The article kind of dances around, but doesn't seem to make clear, that there is no single definition of “rounding”, just as there is no single definition of “sorting”. There are multiple rounding modes, and the right one has to be chosen by each specific application.

Seemed to me like they were pretty clear about it.