The talk is a bit old, and given I was involved in the research back in 2016/17, I’d like to share some information here.

There are three versions of Unums. Type-1 Unums were described in the book by Gustafson referenced in the talk. Type-2 Unums were a set-theory-based approach I studied in my bachelor’s thesis. Probably influenced by its results, Gustafson returned a few months later to an approach based on the Type-1 Unums, called Type-3 Unums (“posits”), which is more hardware-friendly and closer to IEEE 754 floating-point numbers. Refer to Gustafson’s paper on posit arithmetic (Type-3 Unums), as the other types are not as important.

The paper is very well written and will most probably convey more information in 40 minutes of reading than the 40-minute talk does.


Thank you for both links! I have Gustafson’s book on my shelf, and it’s fascinating… but I haven’t yet put any effort into following it up. My intention was to see how much of a hardware implementation I could build. I will definitely study the posit paper before I start down that path.


I’m not a heavy user of floating-point arithmetic, but I’ve been bitten by it enough times to be interested in learning about alternatives. One basic thing that’s always bothered me about Unums/Posits is that there are three fields (the regime, the exponent, and the fraction) that all vary in size inside a bit vector that itself varies in size, and it’s not well explained how those sizes are chosen or how they affect one another.

Reading between the lines of the paper that @FRIGN linked, I think the answer is something like:

• Posits are a pattern for representing numbers, and the pattern can be tuned for specific applications
• Posit(E,N), where N > 0, E <= (N - 1), is a specific instance of the Posit pattern, where N is the total storage size in bits and E is the (maximum?) number of bits used for the exponent
• In any Posit based calculation, all Posits must share the same values for E and N (or at least E; perhaps N can be filled out with padding, like sign-extension for signed integers?)
• When presented with N bits representing a Posit:
• the first bit is the sign bit
• the next bits are the regime, which ends at the first inverted bit as described in the paper
• if there are fewer than E bits remaining after the sign bit and regime, they represent the exponent (but is it the most or least significant bits of the exponent?)
• if there are E or more bits remaining, the next E bits are the exponent
• any remaining bits represent the fraction (with an implicit 1 bit at the beginning)
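Those steps can be sketched in code. What follows is a minimal decoder I pieced together from the list above — an illustration, not a reference implementation. It ignores negative posits (handled via two’s complement in the real format) and the special zero and “not a real” bit patterns, and it assumes that a truncated exponent field keeps its most significant bits, with the missing low bits read as zero:

```python
# Minimal sketch of decoding a non-negative Posit(E, N) bit-string
# following the steps listed above. Assumptions: no negative values,
# no zero / "not a real" special patterns, and a truncated exponent
# keeps its high-order bits (missing low bits are zero).

def decode_posit(bits: str, E: int) -> float:
    N = len(bits)
    assert bits[0] == "0", "negative posits (two's complement) not handled"

    # The regime is a run of identical bits, terminated by the first
    # inverted bit; a run of m ones means k = m - 1, m zeros means k = -m.
    i = 1
    r = bits[1]
    while i < N and bits[i] == r:
        i += 1
    run = i - 1
    k = run - 1 if r == "1" else -run
    if i < N:
        i += 1                      # consume the terminating inverted bit

    # Up to E exponent bits; if fewer remain, pad missing low bits with 0.
    exp_bits = bits[i:i + E]
    i += len(exp_bits)
    e = int(exp_bits, 2) << (E - len(exp_bits)) if exp_bits else 0

    # Remaining bits are the fraction, with an implicit leading 1.
    frac_bits = bits[i:]
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0

    useed = 2 ** (2 ** E)           # scale factor contributed by the regime
    return useed ** k * 2 ** e * (1 + f)
```

For example, with N = 16 and E = 3, the bit-string 0000110111011101 splits into sign 0, regime 0001 (k = -3), exponent 101 (e = 5), and fraction 11011101, giving 256^-3 * 2^5 * (1 + 221/256) = 477 / 2^27 ≈ 3.55e-6.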

IEEE 754 floating-point numbers have a fixed-size exponent and mantissa. There are some tricks to still squeeze out some precision near zero using subnormal numbers (see Chapter 2 of my thesis for a complete introduction), but in general it’s a relatively “rigid” structure. Another point is the high amount of waste on NaN representations (there are a lot; see Table 2.1 on page 8), ranging from 0.05% up to 3.12% depending on the format.
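That range is easy to reproduce: an IEEE 754 NaN is any encoding with an all-ones exponent and a nonzero mantissa (for either sign), so a quick back-of-the-envelope script over the standard formats gives:

```python
# Fraction of bit patterns wasted on NaN in an IEEE 754 format with the
# given exponent and mantissa widths: NaNs are all-ones exponent with a
# nonzero mantissa, counted once per sign.

def nan_fraction(exponent_bits: int, mantissa_bits: int) -> float:
    total = 2 ** (1 + exponent_bits + mantissa_bits)
    nans = 2 * (2 ** mantissa_bits - 1)
    return nans / total

print(f"binary16: {nan_fraction(5, 10):.2%}")   # ~3.12%
print(f"binary32: {nan_fraction(8, 23):.2%}")   # ~0.39%
print(f"binary64: {nan_fraction(11, 52):.2%}")  # ~0.05%
```

So the 3.12% end of the range comes from binary16 and the 0.05% end from binary64.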

Not to become too technical, but the revolutionary idea behind posits is the following: posits skew the idea of an exponent a bit using the concept of a “regime”, and you end up with no wasted representations. There is no concept of NaN with posits; instead, an interrupt is raised. I think this is a cool idea, as NaN represents an “action” rather than a value, which is bad design.

I agree with your sentiment that Gustafson’s paper is hard to read when it comes to these things. This is why I chose to build a theory for Type-2 Unums in my thesis, as their original introduction via slides was equally difficult to follow. Everyone has their strengths and weaknesses: I really like Gustafson’s visualizations, but his presentation lacks formality. Maybe I’ll get around to writing a paper with a formal introduction at some point, but there actually is a refined published version of Gustafson’s paper here.

For a critical look at posits, I recommend this paper.