1. 64
  1.  

    1. 8

      This is a very cool article, and a very cool ability for Zig! I’d be really interested in seeing some performance measurements, as I can see this having interesting performance characteristics in different situations. I can see it being bad for cache locality in some cases (long arrays, small enums) and good in others (small arrays, big enums), so I’d really like to see some numbers.
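      To make the enum-size side of that trade-off concrete, here is a tiny Rust sketch (my own illustration; the `Event` enum and its variants are made up, not from the article): in a plain array of a Rust-style enum, every element is padded out to the largest variant, which is presumably the per-element cost a split-by-variant layout avoids.

      ```rust
      // Illustration only: a small enum with one rare-but-large variant.
      #[allow(dead_code)]
      enum Event {
          Tick,            // no payload
          Key(u8),         // tiny payload
          Blob([u8; 256]), // rare but large: every element pays for it
      }

      fn main() {
          // In a plain Vec<Event>, each element is as big as the largest variant
          // (plus the tag), regardless of which variant it actually holds.
          println!("size_of::<Event>() = {} bytes", std::mem::size_of::<Event>());
          println!(
              "1000 mostly-Tick events stored as Vec<Event>: {} bytes",
              1000 * std::mem::size_of::<Event>()
          );
      }
      ```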

    2. 4

      Nice article. Another approach is to pack the items densely with no padding: each item keeps its own variable size and alignment, and you memcpy into a naturally aligned variable when you need to operate on one. If you need random access rather than just iteration, an auxiliary array of offsets can be used (a rough sketch of the idea is below). Not sure if that would be natural in Zig or Rust, or if it would break all kinds of compiler guarantees. Perhaps there are some languages which allow transparent encoding for loads/stores?
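      Here is a rough Rust sketch of that dense-packing idea, under the assumption that the items are simple `Copy` values with no internal padding; `PackedBuf` and its methods are illustrative names of mine, not anything from the article. Items are appended back to back, an offset table gives random access, and reads memcpy the bytes into a naturally aligned local via `read_unaligned`.

      ```rust
      struct PackedBuf {
          bytes: Vec<u8>,      // items stored back to back, no padding between them
          offsets: Vec<usize>, // offsets[i] = byte offset of item i, for random access
      }

      impl PackedBuf {
          fn new() -> Self {
              Self { bytes: Vec::new(), offsets: Vec::new() }
          }

          // Append a plain Copy value, storing only its actual bytes.
          fn push<T: Copy>(&mut self, value: T) {
              self.offsets.push(self.bytes.len());
              let size = std::mem::size_of::<T>();
              // Safety: `value` is a valid T and we read exactly `size` bytes from it.
              let src = unsafe {
                  std::slice::from_raw_parts(&value as *const T as *const u8, size)
              };
              self.bytes.extend_from_slice(src);
          }

          // Copy item i back out into a naturally aligned local (the memcpy step).
          // The caller must ask for the same type that was stored at that index.
          fn get<T: Copy>(&self, i: usize) -> T {
              let ptr = self.bytes[self.offsets[i]..].as_ptr() as *const T;
              // Safety: these bytes were written from a valid T of this exact size.
              unsafe { std::ptr::read_unaligned(ptr) }
          }
      }

      fn main() {
          let mut buf = PackedBuf::new();
          buf.push(1u8);
          buf.push(0x2345u16);      // stored right after the u8, unaligned
          buf.push(0x6789_abcdu32);
          assert_eq!(buf.get::<u16>(1), 0x2345);
          assert_eq!(buf.get::<u32>(2), 0x6789_abcd);
          println!("3 items packed into {} bytes", buf.bytes.len());
      }
      ```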

    3. 2

      While creating such data structures is pretty straightforward in Zig, creating any of these examples in Rust using proc macros is basically impossible, because proc macros don’t have access to type information like size or alignment. You could have a proc macro generate a const fn that computes the clusters for a particular enum, but that function cannot be used to specify the length of an array for a generic type (a small illustration of the limitation is sketched below).

      As a Python expat who doesn’t trust himself to write memory-unsafe code, all I can say is “here’s hoping people qualified to solve these problems in Rust are paying attention”.
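      For anyone curious, here is a small stable-Rust illustration of that limitation (my own sketch; `per_cache_line` is a made-up helper, not anything from the article). A const fn can compute a layout-dependent value and be used as an array length for a concrete type, but the generic version is gated behind the unstable `generic_const_exprs` feature.

      ```rust
      // A layout-dependent computation: how many T values fit in a 64-byte line.
      const fn per_cache_line<T>() -> usize {
          64 / std::mem::size_of::<T>()
      }

      // Fine on stable Rust: the type is concrete, so the length is evaluable here.
      const U32_PER_LINE: usize = per_cache_line::<u32>();
      static LINE_OF_U32: [u32; per_cache_line::<u32>()] = [0; per_cache_line::<u32>()];

      // Does NOT compile on stable Rust: using the const fn as an array length
      // inside a generic item requires the unstable `generic_const_exprs` feature.
      //
      // struct CacheLine<T: Copy> {
      //     items: [T; per_cache_line::<T>()],
      // }

      fn main() {
          println!("u32 values per 64-byte line: {U32_PER_LINE}");
          println!("static array length: {}", LINE_OF_U32.len());
      }
      ```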

      1. 20

        This is the kind of work that the RustConf fiasco’s talk subject would have enabled. Instead, we made sure syn/quote/serde remain dominant in the proc macro space, rather than creating a whole new reflection mechanism that would have needed new libraries.