I agree with writing code to be async-agnostic where possible, by cleanly separating pure business logic from anything concerned with I/O.
I think advocating a callback style misses the point - a function designed to be used as a callback can only be used from non-callbacky code if it doesn’t return anything, because the only way callbacks can return (other than nasty global state) is via other callbacks. So the example of using a Protocol from sync code is true as far as it goes, but becomes untrue as soon as your Protocol wants to do anything. (E.g. if the print initiated by the Protocol were asynchronous and later failed, the synchronous caller would never find out about that error.) Explicitly-async functions still have this problem, but at least they make it, well, explicit.
(The correct solution for code that needs to actually deal with composing multiple I/O operations and work with both sync or async is of course monads)
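To make the callback point concrete, here is a hypothetical Python sketch (`save_cb` and `save_sync` are invented names, and the I/O that would really complete later is done synchronously for brevity): the callback version can only report success or failure through more callbacks, so a synchronous wrapper has to smuggle the result out through mutable state.

```python
# Hypothetical callback-style API: its only way to report an outcome
# is through more callbacks.
def save_cb(data, on_done, on_error):
    # In real code the I/O would complete later; for this sketch the
    # "completion" happens synchronously.
    try:
        if not data:
            raise ValueError("nothing to save")
        on_done(len(data))
    except ValueError as e:
        on_error(e)

# A synchronous caller can adapt this only by funneling the result
# back through mutable state - the "nasty global state" escape hatch.
def save_sync(data):
    result = {}
    save_cb(data,
            lambda n: result.update(ok=n),
            lambda e: result.update(err=e))
    if "err" in result:
        raise result["err"]
    return result["ok"]
```

If the completion really did happen later (on another thread or event loop turn), `save_sync` would return before either callback fired, and the error would be lost - which is the scenario described above.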
good code structure (writing as little code as possible that requires I/O),
Sure, there are frequently benefits to separating effectful code from non-effectful code. However, there are genuinely useful times when you want transparent effects (the quintessential example is memoization, and indeed a variant of this effect underlies the implementation of laziness that is pervasive throughout Haskell). By forcing unannotated effects into unsafePerformIO or by strictly separating async code from sync code, as Bob Harper says: “you are deprived of the useful concept of a benign effect”.
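For example, memoization in Python (using the standard library’s `functools.lru_cache`) mutates a cache on every call, yet reads and types like a pure function - a benign effect in exactly this sense.

```python
from functools import lru_cache

# Memoization is an effect (mutating a hidden cache), but a benign
# one: callers can't observe it except as a speedup, so the signature
# stays that of a pure function.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # runs in linear time thanks to the cache
```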
By forcing unannotated effects into unsafePerformIO or by strictly separating async code from sync code, as Bob Harper says: “you are deprived of the useful concept of a benign effect”.
I don’t understand the reasoning here. Benign effects are exactly what unsafePerformIO is for. If you want to “partially prove” the benignness of your effect then there are techniques like http://stackoverflow.com/questions/34494893/how-to-understand-the-state-type-in-haskells-st-monad . But ultimately either you want to track a given effect in the function signature or you don’t; there’s no third option here.
If you don’t expose the asyncness then you still have the problem described in Unyielding, no? An async call has effects on the surrounding code, even if the call itself is benign.
The main problem with unsafePerformIO is just that: the “unsafe” part. Quoting https://hackage.haskell.org/package/base/docs/System-IO-Unsafe.html
It is less well known that unsafePerformIO is not type safe.
Most importantly, it’s not memory safe! Whereas the JVM would throw an exception, Haskell might segfault!
Of course, you can wrap unsafePerformIO in library functions that introduce type safety or runtime-safety via exceptions. My Haskell experience is limited, so I’m not sure if such libraries exist / are popular in Haskell. I’d guess not just based on my perception of the community norms.
the problem described in Unyielding
That’s a big post, so I’m not quite sure what you refer to as “the problem”. Still, I don’t necessarily agree with the fundamental “threads are bad” thesis, and I find the article’s arguments unconvincing. Let’s not throw the baby out with the bathwater; let’s figure out how to constrain the problem to make it work. And yes, that includes overconstraining, which Haskell does, in order to see how far that can get us, so we know where the happy medium may lie.
Sure. IO is a very crude notion where we put reading from a readonly file and poking arbitrary memory in the same category of “arbitrary effects we’re not even going to try to model”. There are ongoing efforts to try to split it up into something more granular - see e.g. https://mail.haskell.org/pipermail/haskell-cafe/2008-September/047069.html or more recent free-coproducty approaches - and at that point you can distinguish between unsafeAccessFilesystem and unsafeAccessRawMemory or whatever.
But that seems to me to be being more Haskelly, not less. I certainly don’t see how not tracking the effect at all (as you would in Java) offers any advantage here.
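As a rough sketch of that granularity in a language without effect tracking (all names here are invented for illustration), capabilities can be passed explicitly so that each signature names only the effects it actually uses:

```python
from typing import Protocol

# Hypothetical capability interfaces standing in for a split-up IO:
# one for filesystem reads, one for raw memory pokes.
class ReadsFiles(Protocol):
    def read(self, path: str) -> str: ...

class PokesMemory(Protocol):
    def poke(self, addr: int, value: int) -> None: ...

def load_config(fs: ReadsFiles) -> str:
    # The signature tracks the filesystem effect and nothing else;
    # this function provably can't poke memory.
    return fs.read("config.ini")

# A fake capability for testing - another payoff of tracking effects
# in the signature.
class FakeFS:
    def read(self, path: str) -> str:
        return f"contents of {path}"

print(load_config(FakeFS()))
```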
Still, I don’t necessarily agree with the fundamental “threads are bad” thesis, and I find the article’s arguments unconvincing. Let’s not throw the baby out with the bathwater; let’s figure out how to constrain the problem to make it work. And yes, that includes overconstraining, which Haskell does, in order to see how far that can get us, so we know where the happy medium may lie.
I don’t think this is one of those cases where the truth lies in the middle. Explicit async doesn’t make it noticeably harder to write a function that contains a lot of yield points if that’s what you want, it just forces you to be, well, explicit about it. Like, if you implemented that Entity API in Haskell by having all the functions return IO, why would that be worse than implementing it in a language where effects aren’t tracked at all?
I feel like the author’s complaints are more Python-specific… Certainly there are times when a callback structure works great, but there are also times when it’s a lot shorter and a lot clearer to use async/await. The author contends that “function color” is bad, but the author is essentially just describing monads à la bind/return. I’m not a Python aficionado, but it seems like Python’s type system does not appropriately cover the monadic structures they are using. Arguing that function color, or monads, shouldn’t be used because you can do things without them is kinda like saying functions shouldn’t be used because you don’t strictly HAVE to use them and they add complexity.
Nice write-up. I’ve written too much uncolored async code, and prefer colored async code. However, I wonder if async/await can be simplified in later languages.
Right now, you need to await futures to extract the value from them. In most of the code I write (and have seen), that is the typical use case: enqueue an async operation, yield to the scheduler, await completion/cancellation, and then extract the value. Why not have await be implicit? For the occasional case of needing to work with futures directly, have an operator that prevents the expression from awaiting, and returns the future.
You’d need this if you wanted to await any future completing out of a set, all of them, or to throw it to someone else to deal with.
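A minimal asyncio sketch of that case (`first_of_two` and its helpers are invented for illustration): racing two operations means handling the tasks/futures themselves rather than their values, which is exactly where an implicit-await design would need its “give me the future, don’t await it” operator.

```python
import asyncio

async def first_of_two():
    async def slow():
        await asyncio.sleep(0.05)
        return "slow"

    async def fast():
        return "fast"

    # We need the futures as first-class values here: awaiting either
    # one directly would defeat the race.
    tasks = [asyncio.create_task(slow()), asyncio.create_task(fast())]
    done, pending = await asyncio.wait(
        tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return next(iter(done)).result()

print(asyncio.run(first_of_two()))  # -> fast
```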
(I get that await is needed for integrating with existing systems, I just dislike how it splits the world into async/non-async functions.)
This is the point Unyielding addressed, no? You can have your yield points be implicit and pervasive, but if you do that then it’s a lot harder to write correct code.
Okay, I missed the Unyielding writeup. Great writeup on how green threads are not nirvana.