1. 10
    1. 2

      This post sounds like it’s overcomplicating the concept in order to be more philosophical than it needs to be. It states the real problems straight-up, but then immediately buries them to try to reach some new conclusion that doesn’t need to be reached.

      The problem really is just nondeterminism. If you take that out, it’s hardly an issue of concurrency anymore. Now you’re just making sure every step of a multi-step action actually occurs, and the only difficulty is ensuring the different places those steps run do so at the right times. This is in contrast to how scheduling really works, where, essentially, different parts of your program run completely at random, and some may never run at all! It’s actually kind of insane if you don’t take into account that it has to be that way to let other programs run.

      In fact, just a few days ago, I wrote a part of my program that has to perform the following, in order: 1. Send a job ID to a queue in an actor that handles receiving messages from a server, 2. Send a message to the server, 3. Wait for a response from the server with that same ID so it can be removed from the job queue. I specifically wrote code to wait for step 1 to finish, rather than firing it off, forgetting it, and going on to step 2. I even wrote a comment explaining why I did it that way. It seems like you shouldn’t need that, right? I’m storing a value locally, in-memory, then sending a TCP request, then waiting for a response to that TCP request, and expecting the first operation to finish before the third one. But never assume your OS/async runtime scheduler isn’t infested with gremlins; just encode the ordering you actually want into the code, rather than letting the scheduler decide what happens.
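      The three steps above can be sketched with Python’s asyncio; everything here (the names, the in-memory set standing in for the actor’s queue, the sleeps standing in for the TCP round-trip) is illustrative, not the actual program:

      ```python
      import asyncio

      # Hypothetical stand-in for the receiving actor's job queue.
      pending_jobs: set[str] = set()

      async def register_job(job_id: str) -> None:
          # Step 1: hand the job ID to the receiving side.
          pending_jobs.add(job_id)

      async def send_request(job_id: str) -> None:
          # Step 2: stand-in for the TCP request to the server.
          await asyncio.sleep(0)

      async def await_response(job_id: str) -> None:
          # Step 3: the response comes back and the ID is removed.
          await asyncio.sleep(0)
          pending_jobs.discard(job_id)

      async def submit(job_id: str) -> None:
          # Explicitly await step 1 before starting step 2, rather than
          # firing it off and hoping the scheduler preserves the order
          # you had in mind.
          await register_job(job_id)
          await send_request(job_id)
          await await_response(job_id)

      asyncio.run(submit("job-42"))
      ```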

      All this is made easier with: state machines, async (as in, allowing one thread to wait for another to complete, which itself is just another state machine anyway), mutexes (which the author mentions, and which are, again, state machines), and maybe some of those fancy testing frameworks that try to eliminate nondeterminism in testing entirely by performing some magic to run the code in every possible permutation.
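      As a toy sketch of the mutex point, assuming asyncio: a lock serializes a read-modify-write that spans an await, so concurrent tasks can’t lose updates. The names and the sleep are illustrative:

      ```python
      import asyncio

      balance = 0

      async def deposit(lock: asyncio.Lock, amount: int) -> None:
          global balance
          async with lock:
              # The read and the write below are separated by a suspension
              # point; without the lock, two tasks could both read the same
              # old balance and one deposit would be lost.
              current = balance
              await asyncio.sleep(0)
              balance = current + amount

      async def main() -> None:
          lock = asyncio.Lock()
          await asyncio.gather(*(deposit(lock, 1) for _ in range(100)))

      asyncio.run(main())
      ```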

      The state machines probably play a bigger role than I’m giving them credit for. If your program’s state is specified clearly enough to be mapped into a proper state machine, you’ve pretty much solved every problem you’ll ever have, barring implementation details like synchronizing access to state. I guess that’s the point of math, though, right? A function is just a map over some values. And ideally, our programs would be similar!
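      As a toy illustration of that mapping (all names hypothetical), the job lifecycle from my earlier example could be written as an explicit state machine, where the legal transitions are a plain enumerable table and anything outside it is a bug by construction:

      ```python
      from enum import Enum, auto

      class JobState(Enum):
          QUEUED = auto()  # ID registered with the receiving actor
          SENT = auto()    # request sent to the server
          DONE = auto()    # response received, ID removed from the queue

      # The whole "function as a map over values" idea, literally:
      # each state maps to the set of states it may move to.
      TRANSITIONS = {
          JobState.QUEUED: {JobState.SENT},
          JobState.SENT: {JobState.DONE},
          JobState.DONE: set(),
      }

      def advance(current: JobState, nxt: JobState) -> JobState:
          if nxt not in TRANSITIONS[current]:
              raise ValueError(f"illegal transition {current} -> {nxt}")
          return nxt

      state = JobState.QUEUED
      state = advance(state, JobState.SENT)
      state = advance(state, JobState.DONE)
      ```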

      1. 2

        I tend to agree with this.

        Part of what I like about coroutines / async is that this style gives you more control over where the state-splitting can happen, and makes it explicit where it is, i.e. only the places where you say the magic word “await” or “yield”.
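        A minimal sketch of that property, assuming Python’s asyncio: the read-modify-write below has no await between the read and the write, so no other task on the event loop can interleave there; the explicit `await` is the only suspension point.

        ```python
        import asyncio

        counter = 0

        async def bump(times: int) -> None:
            global counter
            for _ in range(times):
                value = counter      # no await between this read...
                counter = value + 1  # ...and this write, so no interleaving
                await asyncio.sleep(0)  # the only place another task may run

        async def main() -> None:
            # Two tasks racing on the same counter; moving the sleep between
            # the read and the write could lose updates, but as written the
            # increments are atomic with respect to other tasks.
            await asyncio.gather(bump(1000), bump(1000))

        asyncio.run(main())
        ```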

        1. 0

          I implemented the whole ChaCha20 machinery with coroutines in PicoLisp.

          You could do it in your language too.