1. 21
  1.  

  2. 6

    Inconvenient knowledge is better than convenient ignorance

    That speaks to me. I prefer code that says what it does, so I can understand your code and have confidence in mine.

    There can be such a thing as too much type inference, for instance. In TypeScript, all function return types are inferable. That’s great for closures, but I make mistakes when I don’t make the return type explicit for more complex named functions.
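
    For instance, a minimal TypeScript sketch of the kind of slip I mean (the function names are made up):

        // Inferred: the return type silently widens to number | undefined
        function parseAge(input: string) {
          const n = Number(input);
          if (Number.isNaN(n)) return undefined; // easy to forget this branch widened the type
          return n;
        }

        // Explicit: the compiler rejects the forgotten branch instead of widening
        function parseAgeStrict(input: string): number {
          const n = Number(input);
          if (Number.isNaN(n)) return undefined; // error under strictNullChecks: 'undefined' is not assignable to 'number'
          return n;
        }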

    I like Swift’s unusual try keyword that, despite being a no-op, is required to the left of any call to a function that can throw. You always know which lines program flow can escape from, and by exclusion, where it will either continue or crash.

    1. 9

      This comes across to me as Stockholm syndrome. I can’t agree with this premise after seeing how Zig implements color-less async/await:

      https://kristoff.it/blog/zig-colorblind-async-await/

      I highly recommend watching Andrew Kelley’s video (linked in that article) on this topic where he does a survey of function-colored languages and contrasts how Zig handles async/await:

      https://youtu.be/zeLToGnjIUM

      1. 5

        Aside, but I wish more Zig content was in blog posts instead of videos. YouTube is great for reaching certain demographics, but text is just way more efficient for me.

        1. 4

          Imo watching the creator of the language give a presentation provides something you can’t get from text alone. Agree in general though… I’ve come across a few YouTube videos that just paste content from Stack Overflow and it’s infuriating :)

          1. 1

            I’d rather watch a YouTube video in a Stack Overflow answer than read a Stack Overflow answer in a YouTube video :D

        2. 4

          This comes across to me as Stockholm syndrome.

          Stockholm syndrome is such a bombastic phrase for what often boils down to preference. Zig offers optionally explicit effects, but I can see this being unnecessary complexity if you’re writing code that is mostly async with a few portions of synchronous code. I think it boils down to what you find ergonomic for the program you’re writing.

          1. 3

            I think it boils down to what you find ergonomic for the program you’re writing.

            I don’t understand your point at all. If given the option of not bifurcating your codebase into regular and async functions, why would you choose to? This is a language design issue, not a program design issue.
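
            To make the bifurcation concrete, here’s a minimal sketch in an explicitly colored language (TypeScript; the names are just for illustration): once any callee is async, every caller up the chain has to become async too.

                // fetchUser is a stand-in for real async I/O
                async function fetchUser(id: string): Promise<string> {
                  return `user-${id}`;
                }

                function greetSync(id: string): string {
                  // const name = await fetchUser(id); // error: 'await' only works inside an async function
                  return "hello";
                }

                async function greetAsync(id: string): Promise<string> {
                  const name = await fetchUser(id); // so this caller turns async ("red") as well
                  return `hello ${name}`;
                }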

            1. 1

              If given the option of not bifurcating your codebase into regular and async functions, why would you choose to?

              If you are writing a mostly async program, you still need to go through the overhead of calling async and then awaiting it. Sure, you can compile the underlying asynchronous logic away if you want to then use this codebase in a synchronous context, but what if you know your code will never run in a synchronous context? Why bother ensuring that the abstraction for swapping synchronous and asynchronous code even works, instead of just going full steam ahead with asynchronous code?

              1. 3

                But you might not know. Suppose you have a long-running function that you could either run synchronously on threads or run asynchronously with yields on task-stealing threads. The first will be faster; the second will give you more even p90 and p99 latencies if your requests come in unevenly. To find out, you must simulate your load and apply it to both conditions, which means writing your code twice. Versus Zig, where you just write your code once.

                https://www.youtube.com/watch?v=kpRK9BC0-I8&t=233s

                1. 1

                  Sure and in situations where you aren’t sure, I agree, Zig’s compile-time ability to turn async on and off can be a help. But this just goes back to what I said upthread: “I think it boils down to what you find ergonomic for the program you’re writing”

                  1. 3

                    Well if you are a library writer then you are going to always be unsure. So surely you must concede, then, that rust is unfriendly to library writers.

                    1. 1

                      So surely you must concede, then, that rust is unfriendly to library writers.

                      I’m not sure what Rust has to do with this. Asynchronous concurrency certainly isn’t specific to Rust.

                      Well if you are a library writer then you are going to always be unsure.

                      This depends on the library you’re writing. The scenario I’m envisioning is quite simple: I’m writing a networked service that I will be deploying onto a set of known hardware. In more generic libraries, yeah I can see Zig’s compile-time async switching being a feature. Thinking a bit more about this, another situation I can see benefiting from compile-time async switching is testing, where you’d like to shut off asynchronous execution for deterministic test behavior. Regardless, I maintain that “I think it boils down to what you find ergonomic for the program you’re writing”.

          2. 3

            But Zig does have a difference between these functions. There are still two types of functions, and there are limitations on which type can be called when (e.g. main can’t be async for obvious reasons, and there are nosuspend contexts).

            Zig hasn’t removed the difference, because that’s logically impossible. It only obfuscated the difference syntactically.

            To me this is a parlor trick of “look no async!”, but in fact there still is, and you’re just programming blind. Now you need to be extra paranoid about holding locks across surprise suspension points, about pointers to parent stack frames which may not be there anymore after the next function call, and about calling non-Zig blocking functions which can block an async runtime you didn’t even ask for. Or you mark function calls nosuspend and hey - you have colored them!
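
            To illustrate the class of bug I mean, here is a minimal TypeScript sketch with made-up names; the suspension point is explicit here, whereas in Zig it would hide behind an ordinary-looking call:

                // State checked before a suspension point may no longer hold after it.
                const cache = new Map<string, string>();

                async function fetchBody(url: string): Promise<string> {
                  return `<body of ${url}>`; // stand-in for real I/O
                }

                async function load(url: string): Promise<string> {
                  if (!cache.has(url)) {
                    const body = await fetchBody(url); // suspension point: another load(url) can interleave here
                    cache.set(url, body);              // so both callers end up doing the work
                  }
                  return cache.get(url)!;
                }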

            1. 3

              It’s clear you haven’t actually used this in practice, but let me assure you I’ve run into none of the concerns you bring up; it does “just work”, and it works as you would expect. It’s a little bit of a challenge figuring out how to wrangle such a low-level async (you have to think about what’s happening on the hardware), and yes, the first time you do it you will probably mess up.

              1. 2

                To me this is a parlor trick of “look no async!”

                And yet I have a Redis client, implemented in a single codebase, that can be used by both async and blocking applications. Zig async is not without issues, but the “not having to duplicate effort” tradeoff is a pretty good one and your dismissal is missing its most relevant strength.

                https://github.com/kristoff-it/zig-okredis

            2. 4

              In the author’s sense of colored functions, E-style auditors allow for arbitrary coloring of objects, using whatever user-defined predicates a programmer might desire, with no computational ceiling to the choice of color. (An object is like a letrec over a closure.) The auditor gets to examine the source code of the object, examine the types of the object’s closure, and consult the opinions of other auditors.

              1. 2

                Any code that “feels synchronous” necessarily makes you pay for that feeling by stealing true concurrency away from you. It may seem more convenient at first, but the second you need more control you have to use much clunkier abstractions.

                Yeah, you’ve got to choose a default for what you mean when you write straight-line code without explicit concurrency markup.

                By default, do you want:

                   a = func_a()
                   b = func_b()
                

                to mean: (i) run func_a, then run func_b, or (ii) run func_a and func_b concurrently (and plough on until an explicit join/await of some kind).

                I want (i), because I think it is easier to reason about. And - unless there is I/O going on - I think (ii) would be no faster in general (because you’re either in a single OS thread or trying to do some implicit locking of every data structure if you are defaulting to everything being concurrent).
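
                Spelled out with promises (a minimal TypeScript sketch; funcA and funcB are placeholders), the two defaults look like:

                    declare function funcA(): Promise<number>;
                    declare function funcB(): Promise<number>;

                    async function sequential() {
                      const a = await funcA(); // (i) run funcA, then funcB
                      const b = await funcB();
                      return a + b;
                    }

                    async function concurrent() {
                      // (ii) start both, plough on, join at the end
                      const [a, b] = await Promise.all([funcA(), funcB()]);
                      return a + b;
                    }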

                1. 4

                  I dunno, compilers are already allowed to reorder, intermix and generally mush around the ordering of func_a() and func_b() whenever they can prove it doesn’t violate any invariants. For example, they can interleave instructions for better instruction-level parallelism. If func_a() and func_b() are truly independent then there’s a cost tradeoff to automatically parallelizing them: running them on different cores has overhead, so you have to be able to know that the gain is worth the overhead. Generally we leave that decision to humans, but I don’t think there’s any fundamental reason we need to.

                  1. 1

                    Generally we leave that decision to humans, but I don’t think there’s any fundamental reason we need to.

                    I think this is mostly because the state space you have to explore to arrive at a good decision is still very broad. It often depends on what the initial core-delegation overhead is, on how many independent cores are available on a machine, on degenerate cache-coherency cases, and on other complicated things. Humans are often equally unaware, but they can run load tests to gain a working understanding of the system and can optimize accordingly.

                2. 2

                  The article says that red & blue functions are a good thing and that, because of that, the async/await style of working with IO functions is good: by default they are treated as special/async and have to be called in a special way if we want to treat them as “normal”/blocking/sync function calls, which then allows us to write clean imperative code.

                  While it makes sense to me that there has to be a red & blue division among functions due to their nature, and that if we don’t capture that in the language we are just ignoring it and going in blind, I don’t see why that would be proof that async/await is the way to go about it. I am using Haskell a lot these days, and in Haskell, red & blue manifests as IO vs pure functions. Still, it has no async/await mechanism, because functions are always blocking. If you need one to not block, you explicitly fork it onto a separate thread and use concurrency logic to manage its execution.

                  So the conclusion is: we can’t avoid distinguishing between red and blue; we can only pick the default way to handle it (explicit vs implicit), right? And I guess which approach fits better depends on the problem you are solving?

                  EDIT: Reading the article again, I see it actually argues against the approach where functions are implicitly blocking (like Go, and I guess Haskell), calling it confusing and not explicit enough about their blocking nature. That sounds like a bold claim to me, given that a different approach might be better suited to a different problem, and that adapting to a different approach merely takes adjusting one basic assumption (namely, what happens when you call an IO function directly).

                  1. 2

                    Still, it has no async/await mechanism, because functions are always blocking.

                    Arguably, do-notation is a generalized variant of async/await that works with any arbitrary monad, not just promises – IO’s >>= is basically the same thing as JavaScript’s Promise.then.
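
                    On the JS/TS side, the analogy shows up in how async/await (roughly) desugars to .then chaining, much like do-notation desugars to >>=. A sketch with made-up getUser/getScore functions:

                        declare function getUser(): Promise<string>;
                        declare function getScore(user: string): Promise<number>;

                        // "do-notation" style: sequencing with async/await
                        async function report(): Promise<string> {
                          const user = await getUser();
                          const score = await getScore(user);
                          return `${user}: ${score}`;
                        }

                        // roughly what it desugars to: chaining with .then (the >>= analogue)
                        function reportThen(): Promise<string> {
                          return getUser().then(user =>
                            getScore(user).then(score => `${user}: ${score}`));
                        }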

                    1. 1

                      Hm, interesting point. But if so, you could also say that in Go the newline is basically the same thing as JavaScript’s await. What sounds more correct to me, if we’re making this kind of comparison, is to say that in IO, >>= is what the newline is in an imperative language. And blocking by default vs non-blocking by default (which then requires async/await if you want it to block) is just a choice of what the newline does by default. But >>= is then a newline, not directly async/await.

                      1. 2

                        something something programmable semicolons