1. 14

  2. 11

    I am a big fan of Posits, which are the culmination of a year-long research effort in the field of ‘Unums’, one that I was a part of through my Bachelor’s thesis. I analyzed an earlier approach based on interval arithmetic, which turned out not to work so well, but if you are interested in the mathematics behind the “number circle” (i.e. the real numbers with just one infinity, which posits also make use of), definitely check out chapter 3.1.

    You might think that you lose something with just one infinity, but this is not the case! Even mathematically, you can express almost everything involving infinite limits by approaching the infinity from below or from above. Even cooler is that you can divide by zero and by infinity, and it’s all well-defined (see Definition 3.1 on page 17) and proven in the thesis.

    There are still invalid expressions (i.e. ∞+∞, ∞−∞, 0·∞, 0/0, ∞/∞, ∞/0), but the overall mathematics is much cleaner than the mess we have with IEEE floating-point numbers (see Definition 2.3 on page 3), which are cumbersome to use because of the subnormal-number “hack” and the many NaN bit patterns.
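
    For reference, the usual conventions on this one-infinity structure look like the following (my own summary of the standard projective-reals rules; Definition 3.1 in the thesis is the authoritative version):

    ```latex
    % Arithmetic on the "number circle" $\mathbb{R} \cup \{\infty\}$,
    % for finite x unless stated otherwise:
    \begin{align*}
      x + \infty &= \infty, & x \cdot \infty &= \infty \quad (x \neq 0),\\
      x / 0 &= \infty \quad (x \neq 0), & x / \infty &= 0,\\
      \infty / x &= \infty. &&
    \end{align*}
    % Left undefined: $\infty+\infty$, $\infty-\infty$, $0 \cdot \infty$,
    % $0/0$, $\infty/\infty$, $\infty/0$.
    ```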

    Regarding @olliej’s remarks: I’m a numerical mathematician and have never seen a case where an underlying algorithm needs a signed 0, other than as the IEEE way of doing something (but I might just be ignorant). We can definitely agree on the signed ∞, though, which takes getting used to and requires more consideration in algorithm design. NaNs have one useful property: when a calculation is invalid, the error state is propagated meaningfully. However, we waste a lot of space on NaNs (see Table 2.1 on page 8), even though we need at most one NaN. This is especially true for low-precision floating-point numbers: for half-precision (16-bit) floats, which are very relevant for graphics cards, 3.12% of the available bit patterns are NaNs. Gustafson proposed other methods of error handling.
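
    To make the 3.12% figure concrete: a quick back-of-the-envelope check (my own sketch) just counts the bit patterns that have an all-ones exponent and a nonzero fraction field.

    ```rust
    // Fraction of an IEEE binary float's bit patterns that encode NaN:
    // sign (2 choices) x all-ones exponent x nonzero fraction field.
    fn nan_fraction(exp_bits: i32, frac_bits: i32) -> f64 {
        let total = 2.0_f64.powi(1 + exp_bits + frac_bits);
        let nans = 2.0 * (2.0_f64.powi(frac_bits) - 1.0);
        nans / total
    }

    fn main() {
        println!("binary16: {:.2}%", 100.0 * nan_fraction(5, 10)); // 3.12%
        println!("binary32: {:.2}%", 100.0 * nan_fraction(8, 23)); // 0.39%
        println!("binary64: {:.4}%", 100.0 * nan_fraction(11, 52)); // 0.0488%
    }
    ```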

    To give my final remarks: we’ve been working with IEEE floating-point numbers for a long time, and the entire ecosystem has been built around their many deficiencies. Proposing a new machine arithmetic format is as crazy as trying to boil the ocean, I agree. But despite not having received decades of optimization in hardware, Posits as a new concept show many extremely promising and impressive properties. Not only do they ‘work’ mathematically, where they are very simple and elegant to work with; they also have massive advantages for hardware implementations, because they are much simpler to implement in silicon and might actually turn out to be much faster.

    Let’s see what the future brings.

    1. 5

      I work on a programming language for computer graphics. I’m not a numerical mathematician, but there’s a lot of floating point in my project, and my code is tightly coupled to IEEE float semantics. (A lot of code would probably have to be rewritten to switch to Posits.) Also, I have opinions.

      I use NaN boxing to efficiently represent boxed floating-point values. I stole the idea from Google’s V8 JavaScript implementation in the Chrome web browser, but I think Firefox uses the representation as well. It would be cool if there were an analogous technique for Posit numbers, where a 64-bit word can represent either a pointer or a float. Perhaps you could use 64-bit Posit hardware to efficiently perform arithmetic on 63-bit posits, where the missing bit is used as a flag indicating whether the value is a posit or a pointer. It would be good and useful if that could be worked out.
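
      In case it helps others, here is a minimal sketch of the NaN-boxing idea (the constants, tag layout, and names are my own illustration, not any engine’s actual scheme):

      ```rust
      // Sketch of NaN boxing: a 64-bit word is either a plain f64 or a
      // 48-bit payload (here: a fake pointer) tucked into the fraction
      // bits of a quiet NaN. Real engines also canonicalize NaNs produced
      // by arithmetic so they cannot collide with tagged words.
      const QNAN: u64 = 0x7ff8_0000_0000_0000; // quiet-NaN bit pattern
      const TAG_PTR: u64 = 0x0001_0000_0000_0000; // "this is a pointer" bit
      const PAYLOAD: u64 = 0x0000_ffff_ffff_ffff; // low 48 bits

      #[derive(Debug, PartialEq)]
      enum Value {
          Number(f64),
          Pointer(u64), // stand-in for a real pointer
      }

      fn box_value(v: Value) -> u64 {
          match v {
              Value::Number(f) => f.to_bits(),
              Value::Pointer(p) => QNAN | TAG_PTR | (p & PAYLOAD),
          }
      }

      fn unbox_value(bits: u64) -> Value {
          if (bits & (QNAN | TAG_PTR)) == (QNAN | TAG_PTR) {
              Value::Pointer(bits & PAYLOAD)
          } else {
              Value::Number(f64::from_bits(bits))
          }
      }

      fn main() {
          let p = box_value(Value::Pointer(0xdead_beef));
          assert_eq!(unbox_value(p), Value::Pointer(0xdead_beef));
          let n = box_value(Value::Number(1.5));
          assert_eq!(unbox_value(n), Value::Number(1.5));
      }
      ```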

      The fact that NaN != NaN in IEEE float is an atrocity. The standard equality operator in my language is an equivalence relation, where a==a, and that is not negotiable. William Kahan would not approve of my solution, but my language is not meant for numerical mathematicians.

      The fact that 0 == -0 in IEEE float arithmetic is a nuisance, from a programming language semantics perspective. @icefox posted a link showing that it is causing problems for Javascript language designers as well. It’s a problem I can live with though. I can allow this without breaking the equivalence relation axioms.
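
      For what it’s worth, here is a minimal sketch (my guess at one workable rule, not necessarily this language’s actual definition) of an equality that satisfies both points: it is reflexive even for NaN, and it still treats 0 and -0 as equal:

      ```rust
      // Reflexive float equality: all NaNs are equal to each other, and
      // everything else falls back to IEEE ==, so 0.0 == -0.0 still holds.
      fn lang_eq(a: f64, b: f64) -> bool {
          (a.is_nan() && b.is_nan()) || a == b
      }

      fn main() {
          let nan = f64::NAN;
          assert!(lang_eq(nan, nan)); // a == a holds, unlike IEEE !=
          assert!(lang_eq(0.0, -0.0)); // the -0 nuisance is tolerated
          assert!(!lang_eq(nan, 1.0));
      }
      ```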

      My language supports both finite and infinite geometric objects, and it depends on the properties of inf and -inf to represent the infinite ones. I don’t know how this would work with Posits and their single unsigned infinity. Maybe I could use an affine representation for points and vectors, but that would be a backward-incompatible API change.
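
      To illustrate the dependence (a hypothetical sketch, not my language’s actual API): an unbounded extent can be encoded directly with IEEE endpoints, which relies on the ordering -inf < x < +inf that a single unsigned infinity does not provide.

      ```rust
      // A possibly-unbounded interval with IEEE infinities as endpoints,
      // e.g. [0, +inf) is a half-line. The containment test depends on
      // -inf and +inf sitting at opposite ends of the number line.
      struct Interval {
          lo: f64,
          hi: f64,
      }

      impl Interval {
          fn contains(&self, x: f64) -> bool {
              self.lo <= x && x <= self.hi
          }
      }

      fn main() {
          let half_line = Interval { lo: 0.0, hi: f64::INFINITY };
          assert!(half_line.contains(1e300));
          assert!(!half_line.contains(-1.0));

          let whole_line = Interval { lo: f64::NEG_INFINITY, hi: f64::INFINITY };
          assert!(whole_line.contains(-1e300));
      }
      ```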

      I’m not too worried about ever having to deal with these issues, since it would require ARM and Intel processors and consumer class GPUs to drop IEEE and switch to Posit before I had to do anything.

      1. 1

        This thesis looks much more readable than the article. I really like the idea of novel arithmetical systems. It’s definitely an uphill battle to implement one, but it’s worthwhile, since the alternative, “we’ve tried nothing, it didn’t work, and now we’re out of ideas”, is pretty lousy.

        I would definitely like to hear from @olliej about which algorithms need +/- 0 or +/- infinity in ways that may not be adaptable. Not because I think they’re wrong or trying to mislead, but because if that truly is the case, there is likely a lot of interesting mathematics there.

      2. 5

        I’m no mathematician, but I’ve read enough about the design of IEEE floats to be pretty confident in saying that there aren’t many outright mistakes, just deliberate tradeoffs. Some of those tradeoffs didn’t pan out, but most of them worked out quite well. The space spent on NaNs was originally envisioned as holding error flags telling you more about the operation that caused the NaN. That idea, along with signaling NaNs, ended up being more trouble than it was worth, but that wasn’t necessarily obvious in the early 1980s. Wasting half a bit on a sign bit and the accompanying -0 is inelegant, but it seldom poses actual problems (though people keep finding ways to screw it up). And in return you got floating-point numbers that could be made very fast with relatively few transistors, and that can give you very stable behavior if you use them with a little care.

        So, can we do better now? Are there better tradeoffs that can be made? Heck, I dunno, but we should definitely try. Putting a religious right-and-wrong tone over it, though, does nobody any good, and it insults the masters of yesteryear.

        1. 3

          I wasn’t aware that “posit” was another term for Gustafson’s “unums”. From that page’s “Critique” section I was pointed to

          http://people.eecs.berkeley.edu/~wkahan/UnumSORN.pdf

          which I found quite interesting.

        2. 4

          I wouldn’t say that IEEE semantics are “necessary”. I would say that Posits are not a drop-in replacement for IEEE floats. If you have numerical algorithms that are carefully written against the properties of IEEE arithmetic, they need to be replaced by new algorithms based on new techniques. Reimplementing this reliable old library code could be expensive, and the cost has to be justified by the lower cost of computation using posits.

          1. 4

            Ugh, the problem with posits is that what they claim is unnecessary is in fact necessary:

            • Distinguishing +/- 0 and +/- Infinity is critical to many high-precision numeric algorithms (where posits will fail in similar ways); see the sketch after this list.
            • NaNs are actually meaningful.
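
            A standard illustration of the first point is Kahan’s branch-cut example (a minimal sketch of my own, not taken from the article): the sign of zero tells you which side of a branch cut you are on.

            ```rust
            // The sign of an IEEE zero carries directional information.
            // With a single unsigned zero (as in posits), the two calls in
            // each pair below could no longer be distinguished.
            fn main() {
                let pos_zero: f64 = 0.0;
                let neg_zero: f64 = -0.0;

                // Division by signed zero picks the direction of the infinity.
                println!("{}", 1.0 / pos_zero); // inf
                println!("{}", 1.0 / neg_zero); // -inf

                // atan2 uses the sign of zero to choose between +pi and -pi,
                // i.e. which side of the negative real axis a point lies on.
                println!("{}", pos_zero.atan2(-1.0)); //  3.141592653589793
                println!("{}", neg_zero.atan2(-1.0)); // -3.141592653589793
            }
            ```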

            I would argue that removing these cases is responsible for a lot (though not all) of their “same bit width” precision improvements.

            The article also repeats the complete nonsense that IEEE 754 isn’t deterministic and differs across platforms. It is, and it doesn’t.

            1. 3

              Honestly, I was pretty put off by the unnecessary ‘religion’ metaphor. It made the whole thing pretty hard to follow.