1. 28

  2. 5

    The answer to the last question is very disappointing; the TL;DR is “it’s OK for x/0 = 0 because it’s defined that way”. Of course, but the same argument could be made for any other result. IEEE 754 did not define it as 0 for very good reason. It’s unfortunate to see programs that are ostensibly designed for rigor taken in such a poor direction simply because it’s easier. (A Lean sketch contrasting the two conventions follows after this thread.)

    1. 1

      I wish the author had dug a little deeper into the history of this. I agree with everything you said, which leads me to believe there must have been an underlying reason for it. I feel confident in saying the people who wrote these programs all know this; surely they weren’t just being lazy. It very much feels like either a deliberate design decision or a “we couldn’t find a better way” kind of thing.

    2. 3

      Are these questions in fact frequently asked, or are they a hypothetical dialogue meant to give the article structure? Maybe I just don’t know type theory circles well enough to anticipate the questions the article answers.

      A question I have related to division by zero in type theory: where, if anywhere, does type theory diverge from practical type system implementations because of special cases like division by zero, and what did those tradeoffs cost? Does IEEE floating point’s special-casing of values like NaN carry a type-theoretic burden similar to, say, the null pointer sentinel value? (A rough illustration of the NaN side of this follows below.)
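
A minimal Lean 4 sketch of the two conventions discussed above, assuming a recent toolchain (the lemma name `Nat.div_zero` comes from Lean’s standard library and may differ between versions):

```lean
-- In Lean, natural-number division by zero is *defined* to be 0, so the
-- equation below holds essentially by definition, not by any deep fact.
example (n : Nat) : n / 0 = 0 := Nat.div_zero n

-- Lean's `Float` follows IEEE 754 instead: dividing by zero yields the
-- special values infinity and NaN rather than an arbitrary total result.
#eval (1.0 : Float) / 0.0   -- positive infinity
#eval (0.0 : Float) / 0.0   -- NaN
```

The proof-assistant choice keeps division a total function, so every term denotes a value the logic can reason about; the IEEE choice stays closer to ordinary analysis at the cost of carrying special values through every computation.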
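
On the NaN question, a rough Lean 4 illustration (only a sketch; `safeDiv` below is a hypothetical helper, not anything from the article): IEEE equality is not reflexive at NaN, which is exactly the kind of special case that a lawful equality or ordering has to carve around, much as a null sentinel punches a hole in a reference type. The usual type-theoretic alternative is to make the partiality explicit with an option type:

```lean
-- IEEE `==` is not reflexive at NaN, so Float's `BEq` cannot satisfy the
-- usual laws of an equality without a special case for this value.
#eval ((0.0 : Float) / 0.0) == ((0.0 : Float) / 0.0)  -- false: NaN is not == to itself
#eval ((0.0 : Float) / 0.0).isNaN                     -- true

-- Hypothetical alternative: instead of a sentinel result (NaN, null, or
-- "x/0 = 0"), return an Option so the caller must handle the undefined case.
def safeDiv (x y : Float) : Option Float :=
  if y == 0.0 then none else some (x / y)

#eval safeDiv 1.0 0.0   -- none
#eval safeDiv 1.0 2.0   -- some 0.5
```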