1. 12

  2. 4

    I agree with writing code to be async-agnostic where possible, by cleanly separating pure business logic from anything concerned with I/O.

    I think advocating a callback style misses the point - a function designed to be used as a callback can only be used from non-callbacky code if it doesn’t return anything, because the only way callbacks can return (other than nasty global state) is via other callbacks. So the example of using a Protocol from sync code is true as far as it goes, but becomes untrue as soon as your Protocol wants to do anything. (E.g. if the print initiated by the Protocol were asynchronous and later failed, the synchronous caller would never find out about that error.)
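
    A hypothetical sketch of that failure mode (names invented, loosely shaped like asyncio’s Protocol/transport pairing): sync code can call send_line(), but if the write later fails, the only delivery channel for the error is another callback, which the synchronous caller never sees.

    ```python
    class LineSender:
        def __init__(self, transport):
            self._transport = transport

        def send_line(self, line: str) -> None:
            # Returns immediately with nothing; the write completes (or
            # fails) later, asynchronously.
            self._transport.write(line.encode() + b"\n")

        def error_received(self, exc: Exception) -> None:
            # The failure from that earlier write lands here, long after
            # the synchronous caller has moved on.
            print("write failed:", exc)
    ```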

    (The correct solution for code that needs to actually deal with composing multiple I/O operations and work with both sync or async is of course monads)
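
    One way to render that monadic idea in Python (a sketch with invented names, essentially the free-monad / sans-io pattern): the business logic yields descriptions of I/O, and a driver - sync or async - interprets them, so the logic itself never commits to a color.

    ```python
    def fetch_greeting(name):
        # Pure logic: yields an I/O request, receives the response back.
        body = yield ("GET", f"/hello/{name}")
        return body.upper()

    def run_sync(gen, perform):
        # Drive the generator with a synchronous interpreter for requests.
        try:
            request = next(gen)
            while True:
                request = gen.send(perform(request))
        except StopIteration as result:
            return result.value

    print(run_sync(fetch_greeting("world"), lambda req: "hello"))  # HELLO
    ```

    An async driver is the same loop with an await around perform; the yielded requests are the values being bound.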

    1. 4

      good code structure (writing as little code as possible that requires I/O),

      -

      I agree with writing code to be async-agnostic where possible, by cleanly separating pure business logic from anything concerned with I/O.

      There is a fundamental problem with both the async vs. non-async split forced by JS and the all-effects-go-in-a-monad dogma that Haskell forces on you. Just as Haskell requires a lexical annotation for effects (i.e. wrapping your function’s result type in a monad), JavaScript with async/await requires a lexical annotation for blocking I/O (async/await) or, worse, explicit inversion of control via callbacks (with or without promises).

      Sure, there are frequently benefits to separating effectful code from non-effectful code. However, there are genuinely useful times when you want transparent effects (the quintessential example is memoization, and indeed a variant of this effect underlies the implementation of laziness that is pervasive throughout Haskell). By forcing unannotated effects through unsafePerformIO, or by strictly separating async code from sync code, you are, as Bob Harper says, “deprived of the useful concept of a benign effect”.
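
      The memoization example, in Python terms: functools.lru_cache mutates a hidden table on every call, yet callers see a pure function. Nothing in the signature announces the effect, and nothing needs to.

      ```python
      from functools import lru_cache

      @lru_cache(maxsize=None)
      def fib(n: int) -> int:
          # Pure from the caller's point of view; the cache write on
          # each call is a benign, unannotated effect.
          return n if n < 2 else fib(n - 1) + fib(n - 2)

      print(fib(200))  # instant, despite the naive recursion
      ```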

      A perfect example of this is the Entity API in Datomic: http://docs.datomic.com/entities.html - The Entity API is impossible in JavaScript (without a native extension that violates the threading model) and too unpleasant to be useful in Haskell (without unsafePerformIO). In short, this API performs blocking reads, fails with an exception, and capitalizes on buffered reads / local caching. Since the database the Entity API operates on is fully immutable, barring network failures (which necessarily should fail the whole aggregate procedure that is performing the reads), the program behaves as if you had infinite memory in which the entire database was locally stored. That is, the code abstracts over the location (and hence the asynchrony) of the data in a safe way. To get the same abstraction in JavaScript, you’d be forced to choose the async model as the lowest common denominator.
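
      A rough Python rendering of that pattern (all names invented; this is not Datomic’s actual API): attribute access blocks on a read from an immutable snapshot, caches the result, and raises on network failure, so the calling code reads as if the whole database were local.

      ```python
      class Entity:
          def __init__(self, db_snapshot, eid):
              self._db = db_snapshot   # immutable snapshot: reads are repeatable
              self._eid = eid
              self._cache = {}

          def __getattr__(self, attr):
              if attr.startswith("_"):
                  raise AttributeError(attr)
              if attr not in self._cache:
                  # Blocking, possibly remote read; a network failure
                  # raises and aborts the whole aggregate computation.
                  self._cache[attr] = self._db.read(self._eid, attr)
              return self._cache[attr]
      ```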

      1. 1

        By forcing unannotated effects through unsafePerformIO, or by strictly separating async code from sync code, you are, as Bob Harper says, “deprived of the useful concept of a benign effect”.

        I don’t understand the reasoning here. Benign effects are exactly what unsafePerformIO is for. If you want to “partially prove” the benignness of your effect then there are techniques like http://stackoverflow.com/questions/34494893/how-to-understand-the-state-type-in-haskells-st-monad . But ultimately either you want to track a given effect in the function signature or you don’t; there’s no third option here.

        That is, the code abstracts over the location (and hence asynchrony) of the data in a safe way. To get the same abstraction in JavaScript, you’d be forced to choose the async model as the lowest common denominator.

        If you don’t expose the asyncness then you still have the problem described in Unyielding, no? An async call has effects on the surrounding code, even if the call itself is benign.

        1. 1

          The main problem with unsafePerformIO is just that, the “unsafe” part. Quoting https://hackage.haskell.org/package/base-4.9.0.0/docs/System-IO-Unsafe.html

          It is less well known that unsafePerformIO is not type safe.

          Most importantly, it’s not memory safe! Whereas the JVM would throw an exception, Haskell might segfault!

          Of course, you can wrap unsafePerformIO in library functions that introduce type safety or runtime safety via exceptions. My Haskell experience is limited, so I’m not sure whether such libraries exist or are popular in Haskell. I’d guess not, based purely on my perception of the community norms.

          the problem described in Unyielding

          That’s a big post, so I’m not quite sure what you refer to as “the problem”. Still, I don’t necessarily agree with the fundamental “threads are bad” thesis, and I find the article’s arguments unconvincing. Let’s not throw the baby out with the bathwater; let’s figure out how to constrain the problem to make it work. And yes, that includes overconstraining, which Haskell does, in order to see how far that can get us, so we know where the happy medium may lie.

          1. 1

            Most importantly, it’s not memory safe! Whereas the JVM would throw an exception, Haskell might segfault!

            Sure. IO is a very crude notion where we put reading from a readonly file and poking arbitrary memory in the same category of “arbitrary effects we’re not even going to try to model”. There are ongoing efforts to try to split it up into something more granular - see e.g. https://mail.haskell.org/pipermail/haskell-cafe/2008-September/047069.html or more recent free-coproducty approaches - and at that point you can distinguish between unsafeAccessFilesystem and unsafeAccessRawMemory or whatever.

            But that seems to me to be being more Haskelly, not less. I certainly don’t see how not tracking the effect at all (as you would in Java) offers any advantage here.

            Still, I don’t necessarily agree with the fundamental “threads are bad” thesis, and I find the article’s arguments unconvincing. Let’s not throw the baby out with the bathwater; let’s figure out how to constrain the problem to make it work. And yes, that includes overconstraining, which Haskell does, in order to see how far that can get us, so we know where the happy medium may lie.

            I don’t think this is one of those cases where the truth lies in the middle. Explicit async doesn’t make it noticeably harder to write a function that contains a lot of yield points if that’s what you want; it just forces you to be, well, explicit about it. Like, if you implemented that Entity API in Haskell by having all the functions return IO, why would that be worse than implementing it in a language where effects aren’t tracked at all?

    2. 2

      I feel like the author’s complaints are more Python-specific. Certainly there are times when a callback structure works great, but there are also times when it’s a lot shorter and clearer to use async/await. The author contends that “function color” is bad, but is essentially just describing monads a la bind/return. I’m not a Python aficionado, but it seems like Python’s type system does not appropriately cover the monadic structures they are using. Arguing that function color, or monads, shouldn’t be used because you can do things without them is kinda like saying functions shouldn’t be used because you don’t strictly HAVE to use them and they add complexity.
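
      To make that correspondence concrete (a sketch with invented helper names, not how CPython actually desugars anything): await plays the role of bind, and the value a plain return produces is rewrapped in the async type, which is return/unit.

      ```python
      import asyncio

      async def get_user_name(fetch_user, uid):
          user = await fetch_user(uid)   # bind: unwrap the async value, continue
          return user["name"]            # unit: the result is rewrapped as async

      # The same shape with explicit continuations - what bind looks
      # like without syntax support:
      def get_user_name_cps(fetch_user_cb, uid, done):
          fetch_user_cb(uid, lambda user: done(user["name"]))

      async def fake_fetch(uid):
          return {"name": "ada"}

      print(asyncio.run(get_user_name(fake_fetch, 1)))  # ada
      ```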

      1. 1

        Nice write-up. I’ve written too much uncolored async code, and prefer colored async code. However, I wonder if async/await can be simplified in later languages.

        Right now, you need to await futures to extract the value from them. In most of the code I write (and have seen), that is the typical use case: enqueue an async operation, yield to the scheduler, await completion/cancellation, and then extract the value. Why not make await implicit? For the occasional case of needing to work with futures directly, have an operator that prevents the expression from awaiting and returns the future instead.

        You’d need this if you wanted to await any future completing out of a set, all of them, or to throw it to someone else to deal with.
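
        For what it’s worth, the closest spelling of that “don’t await” operator in today’s asyncio is ensure_future / create_task, which hands you the future without awaiting it. A sketch of both defaults:

        ```python
        import asyncio

        async def fetch(i):
            await asyncio.sleep(0.1)
            return i * 2

        async def main():
            # Today's default: await is explicit at every call site.
            x = await fetch(1)

            # The proposed inverse default: calls would await implicitly,
            # and an explicit operator would return the raw future.
            # ensure_future is the nearest current equivalent.
            futures = [asyncio.ensure_future(fetch(i)) for i in range(3)]
            done, pending = await asyncio.wait(
                futures, return_when=asyncio.FIRST_COMPLETED)
            print(x, len(done), len(pending))

        asyncio.run(main())
        ```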

        (I get that await is needed for integrating with existing systems; I just dislike how it splits the world into async/non-async functions.)

        1. 1

          This is the point Unyielding addressed, no? You can have your yield points be implicit and pervasive, but if you do that then it’s a lot harder to write correct code.
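
          A contrived sketch of why: with explicit await, the one line where another task can interleave is visible and auditable; with implicit, pervasive yield points, every call is potentially that line.

          ```python
          import asyncio

          balance = 100

          async def withdraw(amount):
              global balance
              if amount <= balance:
                  # Explicit yield point: the only place another task can
                  # run. With implicit yields, any call could interleave.
                  await asyncio.sleep(0)
                  balance -= amount

          async def main():
              # Both withdrawals pass the check before either subtracts,
              # breaking the balance >= 0 invariant.
              await asyncio.gather(withdraw(80), withdraw(80))
              print(balance)  # -60

          asyncio.run(main())
          ```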

          1. 1

            Okay, I had missed the Unyielding write-up. Great piece on how green threads are not nirvana.