    With control-flow futures, two synchronisations (e.g. get(get(nested_future))) are necessary to fetch the result: one waiting for the task that delegates and another for the task that resolves the future. With data-flow futures, a single synchronisation (e.g. get*(nested_future)) is necessary to wait for the resolution of both tasks.
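
    Roughly, in Python terms (a loose analogy only: concurrent.futures has no data-flow futures, so the flatten() helper below is a made-up stand-in for get*):

        from concurrent.futures import Future, ThreadPoolExecutor

        pool = ThreadPoolExecutor()

        def resolve() -> int:
            return 42

        def delegate() -> Future:
            # This task delegates: it returns another future, not a value.
            return pool.submit(resolve)

        nested_future = pool.submit(delegate)

        # Control-flow futures: one get per task, so two synchronisations.
        print(nested_future.result().result())  # 42

        # Data-flow futures: a single get* waits through the whole chain.
        def flatten(f):
            v = f.result()
            return flatten(v) if isinstance(v, Future) else v

        print(flatten(nested_future))  # 42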

    😳 Sounds like they’re trying to figure out async semantics for some WIP language called Encore [1]. Async in this case means waiting for some concurrent computation (not IO). Seems kind of cool, though the syntax looks awful. Julia syntax looks nicer for this type of thing [2].

    I haven’t really played with these kinds of distributed semantics much, but I’d imagine the majority of my effort would go into messing around with data locality & movement.

    I also find the async computation semantics a bit odd, because the concurrent pieces are already encoded in the logic. For example, in [x * 2 | x <- [1..1000]] the x * 2 can already be seen as concurrent. So is it faster to distribute out the x data, compute x * 2 concurrently, and bring the results back? Think of all the fun you can have figuring that out!
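
    Here is a toy version of that question in Python, with a process pool standing in for real distribution; for something as cheap as x * 2, shipping the data out almost certainly costs more than the multiply:

        from concurrent.futures import ProcessPoolExecutor

        def double(x: int) -> int:
            return x * 2

        if __name__ == "__main__":
            xs = range(1, 1001)

            # Sequential: [x * 2 | x <- [1..1000]]
            sequential = [x * 2 for x in xs]

            # Distribute out the x data, compute concurrently, bring it back.
            with ProcessPoolExecutor() as pool:
                distributed = list(pool.map(double, xs, chunksize=100))

            assert sequential == distributed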

    The other piece sort of hanging out in the back of the room is incremental computation. Great, I wrote the most optimized way to concurrently compute some data; now do it again with just a bit more added. It also seems tied to the data locality issue: if a mutated copy of the data later gets computed concurrently, moving only the changes seems more efficient (but maybe not!).
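
    A minimal sketch of the incremental idea, assuming the data can be chunked and a cache keyed on chunk contents (all names here are made up):

        CHUNK = 250

        def chunks(xs):
            return [tuple(xs[i:i + CHUNK]) for i in range(0, len(xs), CHUNK)]

        cache: dict[tuple, int] = {}

        def chunk_sum(chunk):
            # Recompute only on a cache miss; a distributed version would
            # likewise only ship the missing chunks to workers.
            if chunk not in cache:
                cache[chunk] = sum(x * 2 for x in chunk)
            return cache[chunk]

        def total(xs):
            return sum(chunk_sum(c) for c in chunks(xs))

        data = list(range(1, 1001))
        print(total(data))   # computes all 4 chunks
        data[3] = 999
        print(total(data))   # recomputes only the chunk containing index 3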

    Looking at stuff like this makes me feel like the gap between where I’d like programming to be and where we’re at is massive. So FWIW these days I just use a massive machine with 112 vCPUs and lots of RAM, so I don’t have to deal with data locality.

    [1] https://stw.gitbooks.io/the-encore-programming-language/content/

    [2] https://docs.julialang.org/en/v1/manual/distributed-computing/#Multi-processing-and-Distributed-Computing