1. 35
  1. 17

    I remember seeing Zig show up a while back and being intrigued (since C is the language with which I am most familiar, “C without the warts” is a tempting proposition) and it seems as though it has come a long way since I last saw it; however, I still have a lot of reservations about it.

    Here are some of my favorite things about where it sits now:

    • Use of a Maybe/None type rather than implicitly-nullable types
    • Use of an Either type (they do not call it this) to carry error information instead of exceptions or a global error value
    • Enforcing the handling of returned errors (completeness)
    • @int_type(inline is_signed: bool, inline bit_count: u8) -> type
      • If I understand this correctly, this is a layer on top of LLVM’s arbitrarily-sized fixed-width integer arithmetic which will allow me to have integral types of pretty much whatever size I want, so long as that size is known at compile time. This means that I can use a u3 for a value that really only needs 8 states, etc (I have been wanting this in languages ever since I found out that LLVM allows it); for contrast, a rough C bit-field sketch follows this list.
      • Also on this point: first-class types without using a pre-processor; love it!
    • @compile_var for native conditional compilation without a pre-processor; love it!
      • Actually, most of what they’ve done to get rid of the pre-processor looks awesome
    • Goals and separation between debug and release builds (UB always erroring out in debug builds? Yes please!)
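
    For contrast, here is a rough sketch of the closest thing standard C offers today, an unsigned bit-field, which only exists as a struct member and is not a standalone arithmetic type (the names here are just illustrative):

    #include <stdio.h>

    /* Closest C analogue to a u3: an unsigned bit-field.
     * It is not a first-class type; it can only live inside a struct. */
    struct small_state {
        unsigned state : 3;  /* 3 bits, values 0..7 */
        unsigned flag  : 1;
    };

    int main(void) {
        struct small_state p = { .state = 5, .flag = 1 };
        p.state = p.state + 4;  /* stored value is reduced modulo 2^3, so 1 */
        printf("state=%u flag=%u\n", (unsigned)p.state, (unsigned)p.flag);
        return 0;
    }

    A first-class u3, by contrast, could be passed around and stored like any other integer type.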

    Here are some of my least favorite things about where it sits:

    • No support for long double or f128 (note that LLVM supports both of these on architectures which support them)—or does it? It’s mentioned in the style guide but not in the language reference…
    • There is no char or glyph type. I understand that on most modern platforms a char is 8 bits and therefore functionally equivalent to an int8_t (assuming that your platform/implementation makes char signed). However, I like having a separate type that is specifically meant to represent characters (or, better yet, UTF-32 code points, since that is as close to a deterministic character type as you can get; this would be especially nice since the largest Unicode code point fits in 21 bits, so glyph could functionally be a u21; yes, I know that a UTF-32 code point != a character)
      • In addition, if Zig is really going to replace C, then throwing out a lot of its portability is not a great idea. For example, char does not have to be 8 bits long; it is as long as the machine’s byte (which must be at least 8 bits in standard C, but could be arbitrarily large), which is helpful to know since sizes and addressing are defined in units of char. In addition, C does not specify whether char is signed or unsigned (you can easily determine this using limits.h), and even then, technically, char, unsigned char and signed char are three disparate types according to the standard; ISO C does not specify that char should be equivalent to one of the other two, unlike other integral types. (A small C sketch after this list demonstrates these points.)
    • The for loop syntax is not at all what I would expect. Every sensible implementation of a for-in loop I have seen reads as “for something in somecollection”. Zig’s for loop seems backwards: for (args) |arg| {…. I would never expect this, and I cannot think of a sensible way to read it that does not confuse me.
    • There are a million extra sigils on things where I did not expect them (in some ways, Rust had this problem earlier in its life too): @ on identifiers of built-in functions, % on types which can carry an error (i.e., an Either), ? on types which can be None (i.e., a Maybe), %% on function calls to automatically unwrap an error, %% as an operator to specify some error-unwrapping behavior, % on return for automatically wrapping a value in a possible error, and % on defer for conditional deferral (how many uses can we think of for %?). I understand the desire to make things more explicit, and I understand not wanting to add a lot of keywords in order to keep the syntax simple, but overloading one symbol over and over again makes it a lot more difficult to mentally parse, at least for me.
    • A serious lack of documentation. I fully understand that Zig is still very young and that docs often take a back seat, but not having anything formal makes it really hard to know what is supported now and what isn’t. For example, the exact semantics of switch are still a mystery to me.
    • I have seen people say that Zig supports algebraic data types (presumably through some form of anonymous union, like how ? and % were created), but I have found no documentation on this, and I cannot tell whether switch can pattern-match on them. If ? and % are special cases rather than ADTs, that is a major count against the language in my opinion; grafting a Maybe/Either onto the language without the part that makes them most powerful would be distressing.
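
    To make the char point concrete, here is a small sketch in plain standard C (C11 for _Generic; nothing Zig-specific) showing that char’s width and signedness are implementation-defined, that the three char types are distinct, and that the largest Unicode code point fits in 21 bits:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* CHAR_BIT is at least 8, but may be larger on exotic targets. */
        printf("bits per char: %d\n", CHAR_BIT);

        /* Whether plain char is signed is implementation-defined. */
        printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");

        /* Even so, char, signed char and unsigned char are distinct types,
         * which _Generic can tell apart: this always selects 1. */
        printf("%d\n", _Generic((char)0, char: 1, signed char: 2, unsigned char: 3));

        /* The largest Unicode code point, U+10FFFF, fits in 21 bits. */
        printf("0x10FFFF < 2^21: %d\n", 0x10FFFF < (1L << 21));
        return 0;
    }
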
    1. 10

      Hm. As the developer of Myrddin, it’s interesting to read this, since several of the “like” points are unhandled there, while many of the things you complain about are handled.

      • Maybe/None isn’t built in, but is used everywhere, as part of the standard library.
      • Either is also used pervasively, but is a library thing.
      • Custom integer widths are not supported
      • @compile_var is another no. I have thought of allowing $ENV, but it feels icky. Instead, the build tool knows about platform tags and will select files appropriately. This is how support for different system call libraries, assembly quirks, etc. is handled.

      However:

      • No support for long double or f128: Also no, but no philosophical objections
      • char is a single Unicode code point. Always.
      • The for loop syntax is “for pattern in iterable”. Note that the pattern can be anything that can be matched in a match statement, so the for loop also filters.
      • The only sigil is the backtick, used to mark union tags.
      • I don’t have nearly as much documentation as I’d like, but it exists
      • Algebraic data types are all over the place. As I said, std.option(@a) and std.result(@ok, @err) exist as library features.
      1. 2

        Hey, quick question about Myrddin. Someone was asking about it elsewhere, and I am not as familiar with the language as I’d like. Is Myrddin memory safe? I see that the spec says “doesn’t support pointer arithmetic” but that’s as close as I could find in my admittedly quick read.

        1. 6

          No, it’s not memory safe. There’s some stuff that I’d like to do to improve things, but at the moment you can still shoot your foot off accidentally. The difference from C is that the gun isn’t pointed at your foot by default.

          1. 2

            Great, thanks. :)

        2. 1

          Cool! I remember taking a look at Myrddin a while ago as well (though I was at least partially put off by the mix of styles between scripting languages (à la bash) and C; note, this is not to say your language is bad because of it, I just have strong opinions about what looks nice :P).

          I will have to take another look at it to see how it goes!

          It is cool to hear that char is a utf32 codepoint, but does that mean you do not have a type that is guaranteed to be byte-sized (as char is for C)?

          1. 2

            It is cool to hear that char is a utf32 codepoint, but does that mean you do not have a type that is guaranteed to be byte-sized (as char is for C)?

            It has byte-sized types. They’re called ‘byte’. Strings are actually sequences of bytes, not of chars – you unwrap them to char when you iterate over them.

            var b : byte = 65
            var c : char = 'a'
            var u8str : byte[:] = "ab∂"
            
            for c in std.bychar(u8str)
                 use(c) /* iterates 'a', 'b', '∂' */
            ;;
            
            for sub in std.bysplit("semicolon;separated; string", ";")
                 use(sub) /* iterates "semicolon" "separated" "string" */
            ;;
            
          2. 1

            Myrddin is interesting. When looking up ML implementations, I discovered something you might be interested in that had a bit different take:

            http://people.cs.uchicago.edu/~blume/papers/nlffi.pdf

            One can effectively code C in ML with some other ML advantages. It’s the closest thing I’ve seen to Myrddin in terms of the main goal of C with some functional extensions. I wonder what the state of the art is on C FFIs in safe languages like ML, outside of straight formal verification. Not sure I have papers on it. Also, such an embedding in CakeML might allow a sort of shortcut toward certified compilation of C without CompCert. Think that sounds like a good idea, or is my sleep-deprived brain reaching too far? ;)

            Note: I know seL4 did something like that but I don’t recall the method. I’m assuming it was very formal whereas I’m talking just running something like nlffi-style, C program through their compiler without the HOL stuff.

        3. -5

          instead of creating your own programming languages, improve existing ones

          1. 13

            This is non-constructive… Which language do you think Zig overlaps with? The closest that comes to mind would be Rust, but Zig definitely seems to intend to be lower-level than Rust, and C compatibility seems to be one of the drivers of the project (unlike Rust, where many C concepts don’t translate as well and require a bit more glue code; that is totally fine, but not as convenient).

            1. 3

              I’m with you here. I can totally understand the frustration with new programming languages coming out every day, but really, WHY? Those that attract an audience will prosper, and those that don’t will barely make a ripple in the overall computing pond.

              Why fling poo? If you think it’s pointless, just ignore it and let it die unloved.

              1. 2

                Maybe it’s non-constructive, but I can’t really see anything actually interesting or unique about Zig. It just seems to be adding syntax to things … because.

                And maybe that’s more just that the front page is not terribly informative on how the language works, and instead just throws you in the deep end, but it really just looks like someone said “you know what, I think C needs to look more like Ruby, and totally incomprehensible to everyone else”

                1. 4

                  It could be said that most languages are just adding syntax to things that would otherwise boil down to more C boilerplate, especially systems programming languages. I see Zig as a nice attempt to modernize C: sane error handling, Maybe instead of null, generics, explicit integer overflow wrapping, a sane alternative to #ifdef/#endif preprocessor hell, a replacement for the C header “modules” that just end up exploding compile times, etc.

                  I don’t see where you get the Ruby feeling from this. Zig doesn’t look like implicit magic all over the place; rather, it removes some of the magic/undefined behaviour seen in C.

                  1. 2

                    I think you’re conflating Ruby and Rails. I feel like the language on its own doesn’t have that much magic involved, but a lot of the community, and any Rails project, has a whole lot of magic in certain things.

                    Where I see the similarity with Ruby is mainly that Zig uses a pipe character for things that seem indecipherable to people who are new to the language, and there seems to be some magic in when you use the percent symbol. In general, based on the code samples, it also looks like it’s a big fan of throwing special characters into lots of places to create syntax, like Ruby.

              2. 11

                If people followed your advice, there would only be one programming language, which would probably be LISP, which would then contradict your advice because LISP is already perfect and cannot be improved.

                1. 1

                  Tell that to Clojure, Scheme, and other lisps :P

                2. 2

                  The point of this language is to enable incremental migration from C to a language that is significantly safer than C, yet comparably performant. It seems to me that there are no still-living languages that are as easy to migrate to incrementally without either carting over all the sharp edges and undefined behaviors of C or changing the performance profile of your program substantially.
