1. 65
    1. 23

      In case you missed it:

      Introduced arbitrary code execution via ${jndi:ldap://... inside any logged string. (just kidding)

    2. 8

      It’s incredible how much they get done. The last release was in mid-September.

      1. 16

        When typing up these release notes, it struck me just how much was not mentioned, because the bulk of the work done this cycle was on the self-hosted compiler, and the release notes don’t really go into detail on those changes; they basically boil them down to a percentage complete.

        It’s going to be wild how much we get done once that time sink is finished.

        1. 3

          The best thing about programming is that, given the right circumstances, it’s possible to build things at the speed of the brain. It really shows on a project like yours where there are passionate and intelligent people involved. Another project that comes to mind is Factor, where Slava Pestov almost single-handedly built a full stack on top of the language, including audio, OpenGL, text kerning, IDE, …

          No pressure but I really admire what you’re doing. Thanks for reminding me what is possible to achieve.

    3. 6

      The addrspace keyword seems quite nice for microcontrollers and the like.

      pub const will_be_placed_in_flash: i32 addrspace(.flash) = 123;
      
      pub fn readFlash(ptr: *addrspace(.flash) i32) i32 {
          return ptr.*;
      }
      
      1. 2

        Wow, this is very handy. I assume we’ll be free from trying to understand linker scripts in the future?

        1. 10
      2. 1

        I’m super excited to revisit some Zig GBA stuff that’s floating around online with this.

    4. 6

      I’m excited about what the incremental compiler might mean for things built on Zig, such as https://github.com/dundalek/liz – apparently it might allow building a true REPL.

    5. 4

      Some thoughts:

      @minimum and @maximum

      Why not @min and @max?

      usingnamespace No Longer Affects Identifier Lookup

      Ugh. In most of my projects I tend to have the equivalent of “common.h”, where all the basic types and globals are defined. In Zig I would use usingnamespace to import them all into local scope, so that I could reference them with minimum fuss. I’m probably going to have to fix 80% of identifiers in my 15kloc Zig project now when upgrading.

      And yeah, I know I was one of those “abusing” usingnamespace with that usage. And I’m aware of the benefits of reducing the amount of non-local awareness so as to increase code readability. But what’s the big deal about skimming over one file (types.zig) to understand what all those types are? I wish Zig would leave it up to the programmer to decide this.
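
      To illustrate the migration with a rough sketch (Vec2 is a made-up declaration inside the types.zig mentioned above):

      // Pre-0.9.0, this line alone made Vec2 visible unqualified in this file:
      //     usingnamespace @import("types.zig");
      // Now that identifier lookup no longer goes through usingnamespace,
      // each name needs an explicit alias or a qualified path:
      const types = @import("types.zig");
      const Vec2 = types.Vec2;

      const origin = Vec2{ .x = 0, .y = 0 };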

      Saturating Arithmetic

      This is great! Now I can delete some more clumsy helpers from utils.zig
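
      For reference, a tiny sketch of the new operators (+| and -| clamp at the integer type’s bounds instead of overflowing):

      const std = @import("std");

      test "saturating arithmetic" {
          const max: u8 = 255;
          try std.testing.expectEqual(@as(u8, 255), max +| 1);
          const min: i8 = -128;
          try std.testing.expectEqual(@as(i8, -128), min -| 1);
      }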

      Compile Errors for Unused Locals

      Thanks, I absolutely hate it. And just to think that unused functions are going to become errors as well…
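
      (For anyone who hasn’t hit it yet, this is roughly what it looks like; an explicit discard is the escape hatch:)

      pub fn main() void {
          const answer: i32 = 42;
          // Without the next line, 0.9.0 rejects this as an unused local constant.
          _ = answer;
      }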


      PS: apologies if this is a bit incoherent, I’m not feeling all that well at the moment.

    6. 1

      I really like where Zig is going. My main worry design-wise at this point is anytype duck typing. This relates to the recently discussed “allocgate” because it’s another way to do polymorphism. It’ll be great if there’s a standard way to do runtime polymorphism, as planned in ziglang/zig#10184. But for compile-time, you have to use (foo: anytype) or (T: type, foo: T). Either way, zls can’t provide completion or anything else because it knows nothing about T. This is just like C++ template duck typing, which C++20 Concepts is fixing.

      I asked about this in the Discord and people seemed to think that std.io.{Reader,Writer} is the only case where this pattern is used pervasively. But I have a hard time imagining it will be restricted to that as people start to write lots of libraries in Zig. Imagine reading/editing Rust code with all trait bounds hidden (from you and from rls); it would be a nightmare. I’m happy to be proven wrong!
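
      To make the two compile-time forms above concrete, here’s a sketch (readBytes and the read() call are just illustrative, not any particular std API); note that neither signature says anything about what the passed-in type must provide:

      fn readBytesAnytype(reader: anytype) !usize {
          var buf: [64]u8 = undefined;
          return try reader.read(&buf);
      }

      fn readBytesExplicit(comptime R: type, reader: R) !usize {
          var buf: [64]u8 = undefined;
          return try reader.read(&buf);
      }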

      1. 2

        Either way, zls can’t provide completion or anything else because it knows nothing about T. This is just like C++ template duck typing, which C++20 Concepts is fixing.

        We plan to basically upstream zls into the self-hosted compiler and have the compiler provide compile-time “understanding” to zls, using the same --watch mechanism that should enable incremental compilation.

        1. 2

          I want to clarify that I have not looked at the ZLS source code and cannot vouch for its quality or whether or not we will literally upstream it. Also, the protocol that the compiler will support will be our own language-specific protocol, which is more powerful and performant than LSP. There will need to be a third-party proxy/adapter server to convert between what e.g. VSCode supports and what the Zig compiler provides.

          1. 1

            ZLS is a bit of a red herring; I didn’t actually mean to focus on it. Consider C++ and Rust here:

            // C++: runtime polymorphism
            void foo(MyReader* r) { ... }
            // C++: compile-time polymorphism
            template <typename R> void foo(R r) { ... }
            // C++: compile-time polymorphism with concepts
            template <typename R> requires MyReader<R> void foo(R r) { ... }
            
            // Rust: runtime polymorphism
            fn foo(r: &mut dyn MyReader) { ... }
            // Rust: compile-time polymorphism
            fn foo<R: MyReader>(r: &mut R) { ... }
            

            C++ templates are a lot more powerful than Rust traits. You can do all kinds of things with SFINAE, static assertions, etc. In Rust you can’t express even simple logic like negative trait bounds. However, pre-C++20 templates suck because you have no idea what R is. It’s duck typed: if it compiles, it compiles. This means:

            • You have to rely on non-machine-readable comments, much like types in dynamic languages before TypeScript/Mypy/Sorbet/etc. These can be inaccurate or outdated.

            • Editor tooling is hamstrung: no jumping to the definition of MyReader, no autocompletion after typing “r.” in the body of foo, no way to find all uses of MyReader (it’s just in a comment).

            Zig feels similar to C++ without concepts in this respect. If you want to change from dynamic dispatch (e.g. Allocator) to static (e.g. reader/writer), you sacrifice a lot. If you take reader: anytype, you have a Turing-complete language at your disposal to determine what types are allowed. It seems to me that in order to get the benefits of a restricted system, e.g. “any type that implements MyReader”, you need language support.

            I suppose there could be a convention of calling (e.g.) std.io.AssertReader(@TypeOf(reader)) at the beginning of methods. This would give nice error messages, at least. But it feels a bit hacky for something like ZLS to hardcode recognizing this convention.
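
            For what it’s worth, a rough sketch of what such a convention could look like (assertReader is made up here, not an existing std.io function):

            fn assertReader(comptime T: type) void {
                if (!@hasDecl(T, "read"))
                    @compileError(@typeName(T) ++ " has no read() method");
            }

            fn foo(reader: anytype) !usize {
                comptime assertReader(@TypeOf(reader));
                var buf: [64]u8 = undefined;
                return try reader.read(&buf);
            }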

        2. 1

          This sounds cool, but can you elaborate on how that helps the situation? Will it get extra “understanding” from all the call sites? It seems to me that by design, we’re stuck with English prose describing what sort of types T are expected, and it won’t be possible to (e.g.) command-click into the Reader interface or get autocompletion for its methods. I know Zig’s comptime programming is a lot more powerful than Rust traits, but I wonder if there’s a way to get the best of both worlds.

          1. 2

            I don’t think I can really give you a good answer until we start working on the thing, but in general ZLS right now can only look at the AST to reason about types, while the compiler also implements semantic analysis, so ideally some of that machinery can be used to provide suggestions etc.