1. 4

Do you prefer LEB128 or VLQ for variable-length numbers? (or something else?) What are your arguments?

  1.  

  2. 5

    I’ve not come across VLQ before, but it looks as if it’s a big-endian variant of the same encoding as [U]LEB128. I’ve only ever used [U]LEB128 and that’s because it’s in the DWARF spec, not by choice.

    Which is better depends on what you want to use them for. If you ever want to get at the low bits without parsing the whole thing, [U]LEB128 is better. If you want to get at the high bits first, VLQ is better.

    For parsing, I believe VLQ is likely to be slightly faster because you shift the accumulator on each step, rather than shifting each byte's payload before you apply it to the accumulator. LEB128 should be faster to encode for the same reason (you're shifting the accumulator as you write the bytes). On any modern CPU, I doubt that either of these will be a measurable difference because the bottleneck will be elsewhere. On a low-power custom hardware implementation, it might matter.
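
    For concreteness, here is a minimal sketch of the two decode loops (my illustration, not from the parent comment), assuming the usual convention that the high bit of each byte marks a continuation:

    ```go
    package main

    import "fmt"

    // decodeULEB128: little-endian groups, so each byte's 7-bit payload is
    // shifted into position before being OR'd into the accumulator.
    func decodeULEB128(buf []byte) (v uint64, n int) {
        var shift uint
        for _, b := range buf {
            v |= uint64(b&0x7f) << shift
            shift += 7
            n++
            if b&0x80 == 0 {
                return v, n
            }
        }
        return 0, 0 // truncated input
    }

    // decodeVLQ: big-endian groups, so the accumulator itself is shifted and
    // the payload is OR'd in unshifted.
    func decodeVLQ(buf []byte) (v uint64, n int) {
        for _, b := range buf {
            v = v<<7 | uint64(b&0x7f)
            n++
            if b&0x80 == 0 {
                return v, n
            }
        }
        return 0, 0 // truncated input
    }

    func main() {
        fmt.Println(decodeULEB128([]byte{0xac, 0x02})) // 300 2
        fmt.Println(decodeVLQ([]byte{0x82, 0x2c}))     // 300 2
    }
    ```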

    That said, you might consider why you’re using a variable-length encoding at all. Both of these encodings are optimised for the case where the majority of messages are very short. If you are expecting messages that are often 7-bit values and usually no more than 14-bit values, they’re efficient. If you commonly see 15-bit values, then a base-2^15 encoding may be more efficient because those messages will be 16 bits, though the next size up is 32 bits, which may be a cost that you don’t want to pay.
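
    As a rough illustration of those break points (my sketch, assuming 7 payload bits per byte):

    ```go
    package main

    import "fmt"

    // varintLen: bytes needed to hold v in either LEB128 or VLQ.
    func varintLen(v uint64) int {
        n := 1
        for v >>= 7; v != 0; v >>= 7 {
            n++
        }
        return n
    }

    func main() {
        fmt.Println(varintLen(0x7f))   // 1 byte: values up to 7 bits
        fmt.Println(varintLen(0x3fff)) // 2 bytes: values up to 14 bits
        fmt.Println(varintLen(0x7fff)) // 3 bytes: a 15-bit value, vs 2 bytes in a base-2^15 encoding
    }
    ```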

    If you’re decoding these things to a fixed-size integer, that implicitly gives you an upper bound on the size you actually want to support. If you want to encode 64-bit values with a uniform distribution, then you may get a more efficient encoding from an explicit length. A 64-bit value in either of these encodings needs 10 bytes; with an explicit length it would need only 9 (or 8 bytes plus 3 bits that you might be able to pack somewhere else).

    In all cases, you’re trading some computational complexity for a smaller encoding. This makes sense for things like DWARF, where binary size is a key concern and where every field has to be individually decodable (for unwind tables, at least; for the rest of the debug info, that’s less true). If spending cycles to save space is the right trade-off for you, then you might be better off using a more naive fixed-length encoding and an off-the-shelf compression algorithm. If there’s a lot of redundancy in your message format, most compression libraries do well. For our custom hardware’s trace format, I used fixed-size 256-bit records and then xz-compressed the streams. This burns a lot of cycles, but it got far better than 10:1 compression ratios in the common case because there was a lot of redundancy between adjacent messages (one of the fields is the PC value, which is usually the previous record’s value + 4). Writing a custom format to take advantage of this redundancy would have been very hard.
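
    A minimal sketch of that approach (my illustration, with a made-up record layout, and gzip from Go's standard library standing in for xz):

    ```go
    package main

    import (
        "bytes"
        "compress/gzip"
        "encoding/binary"
        "fmt"
    )

    const recordSize = 32 // 256 bits, fixed layout, no per-field branching

    func main() {
        var raw bytes.Buffer
        pc := uint64(0x1000)
        for i := 0; i < 10000; i++ {
            var rec [recordSize]byte
            binary.LittleEndian.PutUint64(rec[0:8], pc) // field that barely changes
            raw.Write(rec[:])
            pc += 4 // adjacent records usually differ by 4, as with the PC field
        }

        var compressed bytes.Buffer
        zw := gzip.NewWriter(&compressed)
        zw.Write(raw.Bytes())
        zw.Close()

        fmt.Printf("%d -> %d bytes\n", raw.Len(), compressed.Len())
    }
    ```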

    1. 1

      I am not as skilled as david, but I concur that you should not golf yourself into branchy code when you could have a fixed-size, predictable layout and put the branchy, complex code into the compression pass.

      Just compress with something that meets your criteria for size, latency, and compression or decompression speed.

    2. 1

      I’ve never heard the name before, but LEB128 appears to be the same as the varints used in Golang and Protocol Buffers. That’s what I use (unsigned only), both for compatibility and because it was the only such format I knew at the time.
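
      For example, Go's encoding/binary package exposes these unsigned varints directly (a small usage sketch):

      ```go
      package main

      import (
          "encoding/binary"
          "fmt"
      )

      func main() {
          buf := make([]byte, binary.MaxVarintLen64)
          n := binary.PutUvarint(buf, 300)
          fmt.Printf("% x\n", buf[:n]) // ac 02

          v, _ := binary.Uvarint(buf[:n])
          fmt.Println(v) // 300
      }
      ```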

      There was a recent link here to a GitHub repo that implemented and compared the performance of several of these formats, if you’re curious about that.