1. 110

  2. 30

    Definitely worth a read.

    Unlike most language comparisons, this one is really well-written and informative.

    1. 11

      Agreed. It was making me rethink my prior swipe-left on Zig … until I got to “any function that could allocate memory has to take an allocator parameter”, and “no lambdas”, which were my deal-breakers earlier.

      I get that custom allocators can be useful. But I’ve rarely needed them in C++, and when I think of how many functions/classes in my code allocate memory, that’s a fsckton of extra allocator parameters to have to plumb through every call path.

      1. 16

        If you want a global allocator in your zig program, you can put one in a global variable and use it everywhere.

        const allocator = std.heap.c_allocator;
        

        The issue is more when writing libraries - it’s nice for the user of the library to be able to choose.

        It’s not a crazy amount of threading either, e.g. std.ArrayList takes an allocator on init and then the rest of the methods just take self. Similar patterns work elsewhere, e.g. my compiler has a context struct that stores a bunch of global state, and I stash the allocator in there.

        The lack of closures is a pain though. You can write anonymous functions, they just can’t close over state automatically. There is some design discussion on https://github.com/ziglang/zig/issues/229 that may lead to closures with explicit captures.

        1. 5

          Closures that capture all variables have been a minor regret of the Julia developers, so it might be a good idea to go slow and have explicit captures.

          Capturing all variables has caused some tricky bugs with threaded tasks (which are often defined with a closure) and makes some compiler optimisations trickier than they would otherwise be.

          1. 2

            That’s really interesting. Do you have links to any of the discussions about that?

            1. 5

              I don’t know of any deep discussion of the problems with closures in general. Let me know if you find some! Jeff and Stephan talked about it briefly near the end of the State of Julia talk last year, I think.

              The issue with tasks created with @spawn closures is that it’s easy to use a reference from the outer scope (often accidentally, because you reused variable names or forgot to take a copy and pass that in rather than sharing), and then all of your tasks are editing the same variable in parallel. This bug has turned up in real code in lots of tkf’s work and in Pkg.jl.
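              For contrast, a quick Rust sketch (variable names mine) of how explicit move captures force each task to take its own copy of the loop variable instead of sharing the binding:

              ```rust
              use std::thread;

              fn main() {
                  let mut handles = Vec::new();
                  for i in 0..4 {
                      // `move` copies `i` into each closure; without it the
                      // compiler refuses to let threads borrow a shared stack
                      // variable, so the aliasing bug described above can't compile.
                      handles.push(thread::spawn(move || i * 2));
                  }
                  let total: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
                  assert_eq!(total, 12);
              }
              ```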

              Edit: there’s also this long thread about structured concurrency, which touches on some other perceived issues with @spawn. Not super on topic, but may interest you https://github.com/JuliaLang/julia/issues/33248

              1. 2

                Thanks, I’ll check it out.

                1. 3

                  The talk I was thinking of is https://youtu.be/vfxS6_Sx1Pk

                  I have no idea where they talk about it, sorry!

                  1. 3

                    According to the transcript, it might be at this timestamp: https://youtu.be/vfxS6_Sx1Pk?t=3035

                    1. 3

                      This is it, @jamii. It’s slightly different from my memory: they say they would rather have pass-by-value semantics for closures than passing in bindings.

                      I think it’s morally similar to what I said originally, but maybe not so much. Apologies if I’ve sent you on a wild goose chase!

          2. 4

            Would it be possible to specify an allocator as a comptime parameter, like the Allocator template parameter that STL collections use? Then a global allocator wouldn’t add overhead since its context is zero-size, but a local allocator could be used transparently.

            1. 1
              const std = @import("std");
              
              fn BadArrayList(comptime allocator: *std.mem.Allocator, comptime T: type) type {
                  return struct {
                      elems: []T,
              
                      const Self = @This();
              
                      fn init() Self {
                          return .{.elems = &[_]T{}};
                      }
              
                      fn push(self: *Self, elem: T) void {
                          var new_elems = allocator.alloc(T, self.elems.len+1) catch @panic("oh no!");
                          std.mem.copy(T, new_elems, self.elems);
                          new_elems[self.elems.len] = elem;
                          allocator.free(self.elems);
                          self.elems = new_elems;
                      }
                  };
              }
              
              pub fn main() void {
                  var list = BadArrayList(std.heap.c_allocator, u8).init();
                  list.push('a');
                  list.push('b');
                  list.push('c');
                  std.debug.print("{s}", .{list.elems});
              }
              

              but a local allocator could be used transparently.

              I don’t think this works out. The allocator value in this example has to be known at compile-time so it can’t be something that is constructed at runtime. It has to be global.

              1. 1

                Oh, wait, I think you’re asking for something slightly different. If the type of the allocator is known at compile time then the size is known, but the actual value can be passed at runtime, and the value of c_allocator should be zero-sized. That should work, but it would require changing the allocator idiom that is currently used, to let you call the methods directly rather than going through the fn pointers in Allocator.

          3. 14

            any function that could allocate memory has to take an allocator parameter

            I used to feel as strongly about this, but having written a small but functionally complete piece of software in Zig that does a lot of (de)allocation (a CommonMark/GFM implementation), the Allocator type gets explicitly referenced on 50 lines out of 4500. It turned out to be surprisingly unpainful.

            1. 3

              Good to hear! Is the code online? I’m curious to see it.

              1. 2

                Have at it! https://github.com/kivikakk/koino/

                I think it’s currently in use by one other project – only updating it to keep in line with Zig master at the moment.

        2. 13

          Very well written!

          @jamii I think you did a fantastic job of inspecting the state of things in Zig. If you have any insight that doesn’t fit perfectly in a GitHub issue, please consider giving a talk on Zig SHOWTIME; I’m sure people would be interested in hearing what experienced programmers think when getting into Zig.

          1. 8

            Nice writeup! I’m glad to see that Zig’s compile time metaprogramming is carrying its weight. It seems like a great thing to base a language around, and something I’ve been interested in for a long time.

            It’s interesting to compare that with: https://nim-lang.org/araq/v1.html

            … Nim’s meta programming capabilities are top of the class. While the language is not nearly as small as I would like it to be, it turned out that meta programming cannot replace all the building blocks that a modern language needs to have.

            I don’t know why that is (since I don’t know Nim), but of course it’s a hard problem, and it looks like Zig has done some great things here.


            If this is true in general, a plausible reason for this difference is that many of the ‘zero-cost’ abstractions that are heavily used in rust (eg iterators) are actually quite expensive without heavy optimization.

            I’m also finding this with “modern” C++ … A related annoying thing is that those invisible / inlined functions are visible in the debugger, because they may need to be debugged!

            1. 7

              Related wish: I kinda want an application language with Zig-like metaprogramming, not a systems language. In other words, it has GC so it’s a safe language, and no pointers (or pointers are heavily de-emphasized).

              Basically something with the abstraction level of Kotlin or OCaml, except OCaml’s metaprogramming is kinda messy and unstable.

              (I’m sort of working on this, but it’s not likely to be finished any time soon.)

              1. 6

                Julia has similar ideas. There is a bit more built into the type system, e.g. multimethods have a fixed notion of type specificity, but experience with julia is what makes me think that zig’s model will work out well. E.g.: https://scattered-thoughts.net/writing/zero-copy-deserialization-in-julia/ , https://scattered-thoughts.net/writing/julia-as-a-platform-for-language-development/

                1. 4

                  Yeah Julia is very cool. I hacked on femtolisp almost 5 years ago as a potential basis for Oil, because I was intrigued how they bootstrapped it and used it for the macro system. (But I decided against writing a huge parser in femtolisp).

                  And recently I looked at the copying GC in femtolisp when writing my own GC, which is one of the shortest “production” usages of the Cheney algorithm I could find.

                  And I borrowed Julia’s function signature syntax – the ; style – for Oil.

                  But unfortunately I haven’t gotten to use Julia very much, since I haven’t done that type of programming in a long time.


                  That said, I’d be very interested in a “Zig for language development” post to complement these … :) Specifically I wonder if algebraic data types are ergonomic, and if Zig offers anything nice for those bloated pointer-rich ASTs …

                  i.e. I have found it nice to have a level of indirection between the logical structure and the physical layout (i.e. bit packing), and it seems like Zig’s metaprogramming could have something to offer there. In contrast, Clang/LLVM do tons of bit packing for their ASTs and it seems very laborious.

                  1. 3

                    wonder if algebraic data types are ergonomic

                    Aside from the lack of pattern matching, they’re pretty good. There are a couple of examples in the post of nice quality of life features like expr == .Constant for checking the tag and expr.Constant for unwrap-or-panic. Comptime reflection makes it easy to generate things like tree traversals.
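                    For contrast, a Rust sketch (the Expr type here is made up) of how match covers both the tag check and the unwrap in one construct:

                    ```rust
                    enum Expr {
                        Constant(i64),
                        Add(Box<Expr>, Box<Expr>),
                    }

                    fn eval(e: &Expr) -> i64 {
                        // match checks the tag and binds the payload in one step
                        match e {
                            Expr::Constant(n) => *n,
                            Expr::Add(a, b) => eval(a) + eval(b),
                        }
                    }

                    fn main() {
                        let e = Expr::Add(Box::new(Expr::Constant(2)), Box::new(Expr::Constant(3)));
                        assert_eq!(eval(&e), 5);
                    }
                    ```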

                    Zig offers anything nice for those bloated pointer-rich ASTs

                    I mostly work in database languages where the ast is typically tiny, but if you have some examples to point to I could try to translate them.

                    a “Zig for language development” post

                    I definitely have plans to bring over some of the query compiler work I did in julia but that likely won’t be until next year.

                2. 6

                  Take a look at Nim. It has GC (now ref-counted in 1.4, with a cycle collector) and an excellent macro facility.

                  1. 4

                    Nim is impressive, and someone is actually translating Oil to Nim as a side project …

                    http://www.oilshell.org/blog/2020/07/blog-roadmap.html#how-to-rewrite-oil-in-nim-c-d-or-rust-or-c

                    I tried Nim very briefly, but the main thing that turned me off is that the generated code isn’t readable. Not just the variable names, but I think the control flow isn’t preserved. Like Nim does some non-trivial stuff with a control flow graph, and then outputs C.

                    Like Nim, I’m also generating source code from a statically typed language, but the output is “pidgin C++” that I can step through in the debugger, and use with a profiler, and that’s been enormously helpful. I think it’s also pretty crucial for distro maintainers.

                  2. 5

                    I find D’s approach to metaprogramming really interesting, might be worth checking out if you’re not familiar with it.

                    1. 5

                      D’s compile-time function execution is quite similar. Most of the zig examples would work as-is if translated to D. The main difference is that in D a function cannot return a type, but you can make a function be a type constructor for a voldemort type and produce very similar constructions.

                      1. 3

                        Yeah I have come to appreciate D’s combination of features while writing Oil… and mentioned it here on the blog:

                        http://www.oilshell.org/blog/2020/07/blog-roadmap.html#how-to-rewrite-oil-in-nim-c-d-or-rust-or-c

                        Though algebraic data types are a crucial thing for Oil, which was the “application” I’m thinking about for this application language … So I’m not sure D would have been good, but I really like its builtin maps / arrays, with GC. That’s like 60% of what Oil is.

                        1. 2

                          D does have basic support for ADTs (though there’s a better package outside the standard library). Support is not great compared with a proper ML, but it’s certainly no worse than the Python/C++ that Oil currently uses.

                      2. 3

                        Julia sort of fits, depends on your applications. Metaprogramming is great and used moderately often throughout the language and ecosystem. And the language is fantastically expressive.

                        1. 2

                          I want this too, got anything public like blog posts on your thoughts / direction?

                          1. 4

                            Actually yes, against my better judgement I did bring it up a few days ago:

                            https://old.reddit.com/r/ProgrammingLanguages/comments/jb5i5m/help_i_keep_stealing_features_from_elixir_because/g8urxou/

                            tl;dr Someone asked for statically typed Python with sum types, and that’s what https://oilshell.org is written in :) The comment contains the short story of how I got there.

                            I used Python because extensive metaprogramming made the code 5-7x shorter than bash, and importantly (and surprisingly) it retains enough semantic information to be faster than bash.

                            So basically I used an application language for a systems level task (writing an interpreter), and it’s turned out well so far. (I still have yet to integrate the GC, but I wrote it and it seems doable.)


                            So basically the hypothetical “Tea language” is like statically typed Python with sum types and curly braces (which I’ve heard Kotlin described as!), and also with metaprogramming. Metaprogramming requires a compiler and interpreter for the same language, and if you squint we sorta have that already. (e.g. the Zig compiler has a Zig interpreter too, to support metaprogramming)

                            It’s a very concrete project since it’s simply the language that Oil is written in. That is, it already has 30K+ lines of code written for it, so the feature set is exactly mapped out.

                            However, as I’ve learned, a “concrete” project doesn’t always mean it can be completed in a reasonable amount of time :) I’m looking for help! As usual my contact info is on the home page, or use Github, etc.

                            Another way to think of this project is as “self-hosting” Oil, because while the current set of metalanguages is very effective, it’s also kind of ugly syntactically and consists of many different tools and languages. (Note that users are not exposed to this; only developers. Tea may never happen and that’s OK.)

                      3. 5

                        Both languages require explicit annotations for nulls (Option in rust, ?T in zig) and require code to either handle the null case or safely crash on null (x.unwrap() in rust, x.? in zig).

                        Describing Option<T> as “explicit annotation for nulls” has always struck me as missing the point a little (this is not the only essay to use that kind of verbiage to talk about what an option type is).

                        At one level of abstraction, Rust just doesn’t have nulls - an i32 is always a signed 32 bit integer, a String is always an allocated utf-8 string, with no possibility that when you start calling methods on a variable with that type, it will turn out that there was some special null value in that type that makes your calls crash the program or cause undefined behavior. This is a good improvement over many languages that do make null implicitly a member of every type, that the programmer needs to check for.

                        At a different level of abstraction, null semantics are still something a programmer frequently wants to represent using a language - that is, the idea of a variable either being nothing or else being some value of a specific type. The Rust standard library provides the Option<T> type to represent these semantics, and has some special syntactic support for dealing with it with things like the ? operator. But at the end of the day, it’s just an enum type that the standard library defines in the same way as any other Rust type, enum Option<T> { Some(T), None }. If you are writing a program that needs two different notions of nullity for some reason, you can define your own custom type enum MyEnum { None1, None2, Some(T) } using the same common syntax for defining new types.
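                        A runnable version of that last point (the Cache type and its variant names are hypothetical, just to show the shared enum syntax):

                        ```rust
                        // A user-defined type with two distinct "null" notions,
                        // written with the same enum syntax the standard library
                        // uses for Option<T>.
                        #[derive(Debug, PartialEq)]
                        enum Cache<T> {
                            NeverComputed,
                            Evicted,
                            Present(T),
                        }

                        fn main() {
                            let a: Cache<i32> = Cache::Present(42);
                            let b: Cache<i32> = Cache::Evicted;
                            assert_eq!(a, Cache::Present(42));
                            // The two empty states stay distinguishable:
                            assert!(b != Cache::NeverComputed);
                        }
                        ```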

                        1. 5

                          Since you mention it, it’s interesting that ?T could be a tagged union in zig:

                          fn Option(comptime T: type) type {
                              return union(enum) {
                                  Some: T,
                                  None,
                              };
                          }
                          

                          Instead it’s … weird. null is a value with type @TypeOf(null). There are implicit casts from T to ?T and from null to ?T, which are the only ways to construct ?T. There is a special case in == for ?T == null.

                          I had a quick dig through the issues and I can’t find any discussion about this.

                          And, out of curiosity:

                              const a: ?usize = null;
                              const b: ??usize = a;
                              std.debug.print("{} {}", .{b == null, b.? == null});
                          

                          prints “false true”.

                          1. 4

                            I read that like this in Rust:

                            fn main() {
                                let a: Option<u8>  = None;
                                let b: Option<Option<u8>> = Some(a);
                                println!("{:?}, {:?}", b.is_none(), b.unwrap().is_none());
                            }
                            

                            Which has the same output. This makes sense to me, as b and a have different types. Does zig normally pass through the nulls? The great thing to me in Rust is that although a and b are different types, they take up the same space in memory (zig may do the same, I’ve never tested).

                            1. 1

                              Does zig normally pass through the nulls?

                              No, your translation is correct and this is the behavior I would want. But this is something I tested early on because the way ?T is constructed by casting made me suspicious that it wouldn’t work.

                              zig may do the same, I’ve never tested

                              Oh, me neither…

                                  std.debug.print("{}", .{.{
                                      @sizeOf(usize),
                                      @sizeOf(?usize),
                                      @sizeOf(??usize),
                                      @sizeOf(*usize),
                                      @sizeOf(?*usize),
                                      @sizeOf(??*usize),
                                  }});
                              
                              [nix-shell:~]$ zig run test.zig
                              struct:79:30{ .0 = 8, .1 = 16, .2 = 24, .3 = 8, .4 = 8, .5 = 16 }
                              [nix-shell:~]$ zig run test.zig -O ReleaseFast
                              struct:79:30{ .0 = 8, .1 = 16, .2 = 24, .3 = 8, .4 = 8, .5 = 16 }
                              

                              Looks like it does collapse ?* but not ??.

                              1. 2

                                Looks like it does collapse ?* but not ??.

                                It’s not possible to collapse ?? as it would have a semantic loss of information. Imagine ?void as a boolean which is either null (“false”) or void (“true”). When you now do ??void, you have the same number of bits as ?bool.

                                ??void still requires 1.5 bit to represent, whereas ?void only needs 1 bit.

                                Collapsing an optional pointer, though, is possible: Zig pointers don’t allow 0x00… as a valid address, so that bit pattern can be used as a sentinel for null in an optional pointer. This allows really good integration with existing C projects, as ?*Foo is kinda equivalent to a C pointer Foo * which can always be NULL. This translates well to Zig semantics of ?*Foo.

                                Note that there are pointers that allow 0x00… as a valid value: *allowzero T. Using an optional to them doesn’t collapse: @sizeOf(*allowzero T) != @sizeOf(*T)

                                1. 1

                                  It’s not possible to collapse ?? as it would have a semantic loss of information.

                                  ??void still requires 1.5 bit to represent, whereas ?void only needs 1 bit.

                                  I don’t think you read the sizes carefully in my previous comment. ??void actually uses 16 bits in practice.

                                  std.debug.print("{}", .{.{@sizeOf(void), @sizeOf(?void), @sizeOf(??void)}});
                                  
                                  struct:4:30{ .0 = 0, .1 = 1, .2 = 2 }
                                  

                                  Whereas if we hand-packed it we could collapse the two tags into one byte (actually 2 bits plus padding):

                                  fn Option(comptime T: type) type {
                                      // packed union(enum) is not supported directly :'(
                                      return packed struct {
                                          tag: packed enum(u1) {
                                              Some,
                                              None,
                                          },
                                          payload: packed union {
                                              Some: T,
                                              None: void,
                                          },
                                      };
                                  }
                                  
                                  pub fn main() void {
                                      std.debug.print("{}", .{.{@sizeOf(void), @sizeOf(Option(void)), @sizeOf(Option(Option(void)))}});
                                  }
                                  
                                  struct:17:30{ .0 = 0, .1 = 1, .2 = 1 }
                                  

                                  The downside is that &x.? would have a non-byte alignment, which I imagine is why this is not the default.

                                  But that’s what we were testing above. Not “can we magically fit two enums in one bit”.
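                                  For comparison, rustc currently does this kind of packing automatically via niche optimization when the inner type has spare bit patterns (a layout choice of current rustc, not a language guarantee):

                                  ```rust
                                  use std::mem::size_of;

                                  fn main() {
                                      // bool only uses bit patterns 0 and 1, leaving 254
                                      // spare values in its byte, so each extra Option
                                      // layer can claim one of them as its None encoding:
                                      assert_eq!(size_of::<Option<bool>>(), 1);
                                      assert_eq!(size_of::<Option<Option<bool>>>(), 1);
                                  }
                                  ```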

                                  1. 1

                                    Okay, I misread that then, sorry. Zig is still able to do some collapsing of multi-optionals, as there is no ABI definition. It might be enabled in release-small but not in the other modes. But this is just a vision of the future; it’s not implemented atm.

                                2. 1

                                  That makes sense, and is the same as Rust. 0 is a valid bit pattern for usize and thus cannot use the null pointer optimization. In Rust you’d have to use Option<&Option<&usize>> to collapse everything, since Option<T> is not known to be non-null but references (&) are. It would be neat if both Rust and Zig were able to say that Option<T> is non-null if T is non-null, so you could get this benefit without the need for references (or other [unstable] methods of marking a type non-null).
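                                  That layout difference is easy to check with std::mem::size_of; the sizes below are what current rustc produces rather than a language guarantee:

                                  ```rust
                                  use std::mem::size_of;

                                  fn main() {
                                      let w = size_of::<usize>();
                                      // usize has no spare bit patterns, so the tag needs its own word:
                                      assert_eq!(size_of::<Option<usize>>(), 2 * w);
                                      // References are known non-null, so None reuses the null pattern:
                                      assert_eq!(size_of::<Option<&usize>>(), w);
                                      // Nesting through references collapses all the way to one word:
                                      assert_eq!(size_of::<Option<&Option<&usize>>>(), w);
                                  }
                                  ```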

                              2. 1

                                Could those decisions have something to do with C interop? Not sure how much that would affect it, but my inexperienced assumption is that using actual nulls over a tagged union would help with that.

                                1. 2

                                  Worth noting here that Rust guarantees that Option<T> is represented without a discriminant (tag) when T is a nullable pointer type or otherwise has a “niche” where you could encode the discriminant in. This even applies to fat pointers like slices or Vec (which have an internal pointer to the allocation, which can never be null).

                                  Or, more visually:

                                  fn main() {
                                      use std::ptr::NonNull;
                                      use std::mem::size_of;

                                      assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());
                                      assert_eq!(size_of::<Option<NonNull<u32>>>(), size_of::<&u32>());
                                      assert_eq!(size_of::<Option<&[u8]>>(), size_of::<&[u8]>());
                                      assert_eq!(size_of::<Option<Vec<u32>>>(), size_of::<Vec<u32>>());
                                  }
                                  

                                  (NonNull is the non-nullable raw pointer: https://doc.rust-lang.org/std/ptr/struct.NonNull.html)

                                  For that reason, Option can be used in FFI situations.

                                  This is actually a general compiler feature, those composite types are not special-cased. (Declaration of a type as not being nullable is a nightly feature still, though)

                                  https://doc.rust-lang.org/nomicon/ffi.html#the-nullable-pointer-optimization

                                  1. 1

                                    That’s possible. There is a separate [*c]T for C pointers, and the casts could do the conversions. But maybe that would be expensive.

                              3. 4

                                Good read! I’m more convinced than ever that Rust is right for me :)

                                Rust catches overflow in debug and wraps in release. Zig catches overflow in debug/release-safe and leaves behavior undefined in release-fast.

                                Zig aspires to insert runtime checks for almost all undefined behavior when compiling in debug mode.

                                I never liked this debug/release mode distinction. IMO, unless you’re writing code targeting some very specific resource-constrained environment, or maybe a hyper-optimized loop, stuff like assertions (and Rust panics) should be left on in release mode too. A core dump with a tripped assertion is so much easier to dig into than trying to figure out a subsequent crash (or silent data loss!) due to broken invariants.
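                                One mitigating factor in Rust is that arithmetic with pinned-down semantics is available regardless of build mode, e.g.:

                                ```rust
                                fn main() {
                                    // These behave identically in debug and release builds,
                                    // so the overflow policy lives in the code, not the build flags:
                                    assert_eq!(250u8.wrapping_add(10), 4);
                                    assert_eq!(250u8.checked_add(10), None);
                                    assert_eq!(250u8.saturating_add(10), 255);
                                }
                                ```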

                                Rust prevents having multiple mutable references to the same memory region at the same time. This means that eg iterator invalidation is prevented at compile time …. Similarly for resizing a data-structure while holding a reference to the old allocation. Both examples are easy sources of UAF in zig.

                                In rust the Send/Sync traits flag types which are safe to move/share across threads. In the absence of unsafe code it should be impossible to cause data races. Zig has no comparable protection.

                                This is for me maybe the biggest point of Rust. We subject ourselves to the borrow-checker just to get the guarantees of compile time ensured safe code. If I don’t have that guarantee, I’d rather go all the way to some lush GC language.

                                1. 10

                                  Good read! I’m more convinced than ever that Rust is right for me :)

                                  That’s not a bad outcome. At least it was informative :)

                                  I never liked this debug/release mode distinction.

                                  I agree. I’ve been using release-safe for everything in zig, which has the same checks as debug mode. I wouldn’t object to renaming release-fast to release-unsafe. Or release-yolo.

                                  This is for me maybe the biggest point of Rust.

                                  It is a huge innovation. I think zig has also made a huge innovation on a mostly orthogonal axis. There is a lot to be learned from both, especially if we can figure out a way to combine their powers.

                                  1. 3

                                    especially if we can figure out a way to combine their powers

                                    FWIW both Swift and D are looking at integrating ownership or “static” memory management… way after the fact.

                                    I guess my issue is less whether it’s possible to bolt on e.g. to Zig, and more whether it will be a good experience and retain the simplicity of the language…

                                    https://github.com/apple/swift/blob/main/docs/OwnershipManifesto.md

                                    https://dlang.org/blog/2019/07/15/ownership-and-borrowing-in-d/

                                    1. 1

                                      Haskell also has a linear type proposal: https://gitlab.haskell.org/ghc/ghc/-/wikis/linear-types

                                      1. 1

                                        It’s already merged and will be in 8.12, well, the first iteration at least: https://www.tweag.io/blog/2020-06-19-linear-types-merged/

                                        Note that linear types in Haskell != affine types in Rust

                                2. 3

                                  One point that is conspicuously missing is a comparison of resource management (RAII vs defer). It seems to be an area without a clear answer (see this issue’s history: https://github.com/ziglang/zig/issues/782). Was this a non-question in practice?

                                  1. 4

                                    So far I haven’t had any difficulty using defer, but on the other hand most of the code I’ve written leans heavily on arena allocation and I also haven’t put much effort into testing error paths yet. I don’t expect to have much of an opinion either way until I’ve written a lot more code and properly stress tested some of it.

                                    I suspect that defer will be the easy part, and the hard part will be making sure that every try has matching errdefers. There’s a bit of an asymmetry between how easy it is to propagate errors and how hard it is to handle and test resource cleanup in all those propagation paths.
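                                    For comparison, Rust hangs this cleanup on Drop, which runs on every propagation path, including early returns via ?; a minimal sketch (the Guard type is made up for illustration):

                                    ```rust
                                    struct Guard(&'static str);

                                    impl Drop for Guard {
                                        fn drop(&mut self) {
                                            // Runs on every exit path, including error returns,
                                            // so there is no errdefer to forget.
                                            println!("cleaning up {}", self.0);
                                        }
                                    }

                                    fn might_fail(fail: bool) -> Result<(), ()> {
                                        let _g = Guard("resource");
                                        if fail {
                                            return Err(()); // _g is still dropped here
                                        }
                                        Ok(())
                                    }

                                    fn main() {
                                        assert!(might_fail(true).is_err());
                                        assert!(might_fail(false).is_ok());
                                    }
                                    ```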

                                    1. 2

                                      For me, it is a non-problem. You usually see when a return value needs a deferred cleanup, and it’s just a matter of typing

                                      var x = try …(…, allocator, …);
                                      defer x.deinit();
                                      

                                      It’s usually pretty obvious when a clean-up is required, and if not, looking at the return value or doc comments is sufficient.

                                      1. 4

                                        Does Zig suffer from the same problems with defer that Go does? E.g., it’s often quite tempting to run a defer inside a loop, but since Go’s defer is scoped to functions and not blocks, it doesn’t do what you might think it will do. The syntax betrays it.

                                        Answering my own question (nice docs! only took one click from a Google search result): it looks like Zig does not suffer from this problem and runs deferred statements at the end of the enclosing scope.

                                        1. 2

                                          No, Zig has defer for block scopes, not function scopes. When I learnt what Go does, I was stunned at how unintuitive it is.

                                          1. 1

                                            Yeah, it’s definitely a bug I see appear now and then. I suspect there’s some design interaction here between defer and unwinding. Go of course does the latter, and AFAIK, uses that to guarantee that defer statements are executed even when a “panic” occurs. I would guess that Zig does not do unwinding like that, but I don’t actually know. Looking at the docs, Zig does have a notion of “panic” but I’m not sure what that actually implies.

                                            1. 1

                                              Panic calls a configurable panic handler. The default handler on most platforms prints a stacktrace and exits. It can’t be caught at thread boundaries like the rust panic can, so I guess it makes sense that it doesn’t try to unwind.
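                                              That difference is easy to see in a couple of lines of Rust, where a panicking thread’s unwind surfaces as an Err at join():

                                              ```rust
                                              use std::thread;

                                              fn main() {
                                                  // The child thread panics; the panic unwinds and is
                                                  // observed at the thread boundary instead of killing
                                                  // the whole process.
                                                  let result = thread::spawn(|| panic!("boom")).join();
                                                  assert!(result.is_err());
                                              }
                                              ```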

                                              1. 1

                                                Ah yeah, that might explain why Zig is able to offer scope based defer, where as defer in Go is tied to function scope.

                                              2. 1

                                                A “panic” in zig is an unrecoverable error condition. If your program panics, it will not unwind anything but will usually just print a panic message and exit or kill the process. Unwinding is only done for error bubbling.

                                      2. 3

                                        23 minutes of incremental compilation? How does a simple change in an actual application cause upstream dependencies to recompile? Genuinely curious.

                                        1. 2

                                          The application is split into multiple crates (mostly to speed up compilation). The change was in a crate that most of the others depend on. It’s probably close to the worst case for that project, but I spent a lot of time debugging and benchmarking the code in that crate so I hit it often.

                                          1. 3

                                            Oh I see, so these are downstream. Makes sense. These compilation times are insane.

                                            1. 4

                                              Yep. I’ve since heard that buying everyone threadrippers has brought the times down somewhat, but it’s still a big productivity drain.