1. 5
  1.  

  2. 25

    Go doesn’t need async/await; it has goroutines and channels.

    1. 12

      +1

      c := make(chan int)      // future
      go func() { c <- f() }() // async
      v := <-c                 // await
      
      1. 1

        I had a negative knee-jerk reaction when I saw the async/await naming. But if you rename/rejig the api a bit, it seems a lot less bad. See: https://gotipplay.golang.org/p/IoHS5HME1bm w/context https://gotipplay.golang.org/p/Uwmn1uq5vdU

      2. 2

        this uses goroutines and channels under the hood …

        1. 8

          why do you need to abstract-away goroutines and channels?

          1. 4

            I have no idea. It’s like seeing someone recite a phone book from memory: I can appreciate the enthusiasm without understanding the why

        2. 1

          They said Go didn’t need generics either :)

            I get your point, though. Hence almost every bit of this repo screams “experimental.” I have just been playing around with the pattern in some work/personal projects, seeing how it works ergonomically and whether it improves areas with lots of asynchronous operations.

            But it’s only a matter of time until more folks begin trying to abstract away the “nitty-gritty” of goroutines/channels with generics. I personally point to goroutines/channels as Go’s greatest features, but I have seen others really want to abstract them away.

          1. 4

            Goroutines and channels are there to abstract away asynchronous code.

            1. 5

                Goroutines and channels are abstractions that are a marked improvement on the state of the art prior to Go, but I find that they tend to be too low-level for many of the problems that programmers are using them to solve. Structured concurrency (or something like it) and patterns like errgroup seem to be what folks actually need.

              1. 5

                  Yeah, I also thought a long time ago that one area where generics in Go could hopefully help would be in abstracting away channel patterns - things like fan-out, fan-in, debouncing, etc.

                1. 2

                    honestly I just want to be able to call select on N channels where N is not known at compile time. A cool thing about promises is being able to create collections of promises. You can’t meaningfully create collections of channels. I mean sure, you can make a slice of channels, but you can’t call select on a slice of channels. select on a slice of channels is probably not the answer, but it’s a hint at the right direction. Maybe all := join(c, c2), where all three of those values are of the same type chan T. I dunno, just spitballing, I haven’t given it much thought, but the ability to compose promises, and the relative inability to compose channels with the same expressive power, is worth facing honestly.

                  I actually fully hate using async and await in JS but every time I need to fan-out, fan-in channels manually I get a little feeling that maybe there’s a piece missing here.

                  1. 3

                    I just want to be able to call select on N channels where N is not known at compile time.

                    You can.

                      https://golang.org/pkg/reflect#Select
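
                      For illustration, a minimal sketch of what that looks like (the slice of channels here is invented):

                      package main

                      import (
                          "fmt"
                          "reflect"
                      )

                      func main() {
                          // N channels, where N is only known at runtime.
                          chans := make([]chan int, 3)
                          for i := range chans {
                              chans[i] = make(chan int, 1)
                              chans[i] <- i
                          }

                          // One select case per channel; reflect.Select blocks on all of them at once.
                          cases := make([]reflect.SelectCase, len(chans))
                          for i, ch := range chans {
                              cases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ch)}
                          }
                          chosen, v, ok := reflect.Select(cases)
                          fmt.Println("channel", chosen, "value", v.Int(), "ok", ok)
                      }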

                    1. 2

                        the argument that I’m making is that promises have ergonomics that channels lack, and that although I don’t think Go needs promises, the project in question is reflective of how promise ecosystems have invested heavily in ergonomics in many scenarios that Go leaves for every developer to solve on their own. Calling reflect.Select is not a solution to a problem of ergonomics, because reflect.Select is terribly cumbersome to use.

                    2. 1

                      honestly I just want to be able to call select on N channels where N is not known at compile time

                      That’s still too low-level, in my experience. And being able to do this doesn’t, like, unlock any exciting new capabilities or anything. It makes some niche use cases easier to implement but that’s about it. If you want to do this you just create a single receiver goroutine that loops over a set of owned channels and does a nonblocking recv on each of them and you’re done.
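
                        Roughly something like this sketch, with the channels and counts made up just to show the shape:

                        package main

                        import "fmt"

                        func main() {
                            // The set of channels this single receiver owns; sized at runtime.
                            owned := []chan int{make(chan int, 1), make(chan int, 1)}
                            owned[0] <- 1
                            owned[1] <- 2

                            // Sweep the set, doing a nonblocking recv on each channel.
                            for pass := 0; pass < 3; pass++ {
                                for _, ch := range owned {
                                    select {
                                    case v := <-ch:
                                        fmt.Println("got", v)
                                    default:
                                        // nothing ready on this channel right now; move on
                                    }
                                }
                            }
                        }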

                      every time I need to fan-out, fan-in channels manually I get a little feeling that maybe there’s a piece missing here.

                      A channel is a primitive that must be owned and written to and maybe eventually closed by a single goroutine. It can be received from by multiple goroutines. This is just what they are and how they work. Internalize these rules and the usage patterns flow naturally from them.

                      1. 2

                        loops over a set of owned channels and does a nonblocking recv on each of them and you’re done.

                        How do you wait with this method? Surely it’s inefficient to do this in a busy/polling loop. Or maybe I’m missing something obvious.

                          Other approaches are one goroutine per channel sending to a common channel, or reflect.Select().
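
                          For example, the goroutine-per-channel version looks roughly like this (an illustrative sketch, not any particular library’s code):

                          package main

                          import (
                              "fmt"
                              "sync"
                          )

                          // merge forwards every value from the inputs onto one output channel,
                          // one goroutine per input, and closes the output when all inputs close.
                          func merge(inputs ...<-chan int) <-chan int {
                              out := make(chan int)
                              var wg sync.WaitGroup
                              for _, in := range inputs {
                                  wg.Add(1)
                                  go func(in <-chan int) {
                                      defer wg.Done()
                                      for v := range in {
                                          out <- v
                                      }
                                  }(in)
                              }
                              go func() {
                                  wg.Wait()
                                  close(out)
                              }()
                              return out
                          }

                          func main() {
                              a, b := make(chan int, 1), make(chan int, 1)
                              a <- 1
                              b <- 2
                              close(a)
                              close(b)
                              for v := range merge(a, b) {
                                  fmt.Println(v)
                              }
                          }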

                        1. 1

                          Ah, true, if you need select’s blocking behavior over a dynamic number of channels then you’re down to the two options you list. But I’ve never personally hit this use case… the closest I’ve come is the subscriber pattern, where a single component broadcasts updates to an arbitrary number of receivers, which can come and go. That’s effectively solved with the method I suggested originally.

                        2. 1

                          I’ve been programming Go for ten years. I know how channels work.

                          Promises can be composed, and that is a useful feature of promises. Channels cannot be composed meaningfully, and that is rather disappointing. The composition of channels has much to give us. Incidentally, the existence of errgroup and broadly most uses of sync.WaitGroup are the direct result of not having an ability to compose channels, and channel composition would obviate their necessity entirely.

                          What is it that sync.WaitGroup and errgroup are solving when people generally use them? Generally, these constructs are used in the situation that you have N concurrent producers. A common pattern would be to create a channel for output, spawn N producers, give every producer that channel, and then have all producers write to one channel. The problem being solved is that once a channel has multiple writers, it cannot be closed. sync.WaitGroup is often used to signal that all producers have finished.

                          This means that practically speaking, producer functions very often have a signature that looks like this:

                          func work(c chan T) { ... }
                          

                          Instead of this:

                          func work() <-chan T { ... }
                          

                          This is in practice very bothersome. In the situation that you have exactly one producer that returns a channel and closes it, you could do this:

                          for v := range work() {
                          }
                          

                          This is great and wonderfully ergonomic. The producer simply closes the channel when it’s done. But when you have N producers, where N is not known until runtime, what can you do? That signature is no longer useful, so instead you do this:

                          func work(wg *sync.WaitGroup, c chan T) {
                              defer wg.Done()
                              // do whatever, write to c but don't close c
                          }
                          
                          var wg sync.WaitGroup
                          c := make(chan T)
                          for i := 0; i < n; i++ {
                              wg.Add(1)
                              go work(&wg, c)
                          }
                          
                          done := make(chan struct{})
                          go func() {
                              wg.Wait()
                              close(done)
                          }()
                          
                            loop:
                            for {
                                select {
                                case <-c:
                                    // use the result of some work
                                case <-done:
                                    break loop // a bare break here would only exit the select
                                }
                            }
                          

                          That’s pretty long-winded. The producer written for the case of being 1 of 1 producer and the producer written for the case of being 1 of N producers have to be different. Maybe you dispense with the extra done channel and close c, maybe you use errgroup to automatically wrap things up for you, it’s all very similar.

                            But what if instead of N workers writing to 1 channel, every worker had their own channel and we had the ability to compose those channels? In this case, composing channels would mean that given the channels X and Y, we compose those channels to form the channel Z. A read on Z would be the same as reading from both X and Y together in a select statement. Closing X would remove its branch from the select statement. Once X and Y are both closed, Z would close automatically. Given this function, we could simply have the worker definition return its own channel and close it when it’s done, then compose all of those, and then read off that one channel. No errgroup or sync.WaitGroup necessary. Here is an example of what that would look like:

                          func work() <-chan T {}
                          
                          var c <-chan T
                          for i := 0; i < n; i++ {
                              c = join(c, work())
                          }
                          
                          for v := range c {
                              // use the result of some work
                          }
                          

                          Here is a working program that implements this concept at the library level: https://gist.github.com/jordanorelli/5debfbf8dfa0e8c7fa4dfcb3b08f9478
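
                            For reference, here is a rough generics-based sketch of what a pairwise join could look like at the library level; it is illustrative rather than the gist’s exact code, and it treats a nil input as already closed so the var c <-chan T loop above works:

                            package main

                            import "fmt"

                            // join merges two channels into one: receiving from the result is like
                            // selecting over both inputs, and the result closes once both inputs are
                            // closed. A nil input is treated as already closed. (Rough sketch only.)
                            func join[T any](a, b <-chan T) <-chan T {
                                out := make(chan T)
                                go func() {
                                    defer close(out)
                                    for a != nil || b != nil {
                                        select {
                                        case v, ok := <-a:
                                            if !ok {
                                                a = nil // branch removed once closed
                                                continue
                                            }
                                            out <- v
                                        case v, ok := <-b:
                                            if !ok {
                                                b = nil
                                                continue
                                            }
                                            out <- v
                                        }
                                    }
                                }()
                                return out
                            }

                            func work(id int) <-chan int {
                                c := make(chan int)
                                go func() {
                                    defer close(c)
                                    c <- id
                                }()
                                return c
                            }

                            func main() {
                                var all <-chan int
                                for i := 0; i < 4; i++ {
                                    all = join(all, work(i))
                                }
                                for v := range all {
                                    fmt.Println(v)
                                }
                            }

                            Each join costs one extra goroutine, which is the overhead discussed below.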

                          Tada. No errgroup necessary, no sync.WaitGroup, none of that. The producer is completely unaware that it is in a group and the consumer is completely unaware that there are multiple producers. You could use that producer and read its results as if it’s just one, or one of many in the exact same way.

                          It makes consuming the result of N workers much easier, it makes it so that a worker may be defined in the event that it is 1 of 1 and 1 of N in exactly the same way, and it makes it so that consumers can consume the work from a channel without any knowledge of how many producers that channel has or any coordination outside of seeing the channel closed. Of course, implementing this at the library level and not at the language level means adding an overhead of additional goroutines to facilitate the joining. If it could be implemented at the language level so that joining N channels into 1 does not require N-1 additional goroutines, that would be neat.

                          This implementation is also subtly broken in that composing X and Y to form Z makes it so that you can’t read off of X and Y on their own correctly now; this is not a full implementation, and there’s certainly a question of implementation feasibility here.

                          1. 1

                            Channels cannot be composed

                            I don’t think I agree. It’s straightforward to build higher-order constructs from goroutines and channels as long as you understand that a channel must be owned by a single producer.

                            The problem being solved is that once a channel has multiple writers, it cannot be closed.

                            It doesn’t need to be closed. If you have 1 channel receiving N sends, then you just do

                            c := make(chan int, n)
                            for i := 0; i < cap(c); i++ {
                                go func() { c <- 123 }()
                            }
                            for i := 0; i < cap(c); i++ {
                                log.Println(<-c)
                            }
                            

                            This means that practically speaking, producer functions very often have a signature that looks like func work(c chan T) { ... }

                            Hopefully not! Your worker function signature should be synchronous, i.e.

                            func work() T
                            

                            and you would call it like

                              go func() { c <- work() }()
                            

                            Or, said another way,

                            go work(&wg, c)

                            As a rule, it’s a red flag if concurrency primitives like WaitGroups and channels appear in function signatures. Functions should by default do their work synchronously, and leave concurrency as something the caller can opt-in to.

                            But what if . . .

                            If you internalize the notion that workers (functions) should be synchronous, then you can do whatever you want in terms of concurrency at the call site. I totally agree that goroutines and channels are, in hindsight, too low-level for the things that people actually want to do with them. But if you bite that bullet, and understand that patterns like the one you’re describing should be expressed by consumers rather than mandated by producers, then everything kind of falls into place.

                            1. 1

                              It’s clear that you didn’t read the gist. Your example falls apart immediately when the workers need to produce more than one value.

                              Your worker function signature should be synchronous, i.e. func work() T

                              That’s not a worker. It’s just a function; a unit of work. That’s not at all the problem at hand and has never been the problem at hand. Maybe try reading the gist.

                              The workers in the gist aren’t producing exactly 1 output. They’re producing between 0 and 2 outputs. The case of “run a function N times concurrently and collect the output” is trivial and is not the problem at hand.

                                The workers are producing an arbitrary number of values that is not known in advance to the consumer. The workers are not aware that they’re in the pool. The consumer is not aware that they’re reading from the pool. There is nothing shared between producers to make them coordinate, and nothing shared with consumers to make them coordinate. There is no coordination between producers and consumers at all.

                                The consumer is not aware of how many workers there are or how many values are produced by each worker; they are only interested in the sum of all work. The workers simply write to a channel and close it when they’re done. The consumer simply reads a channel until the end. That’s it. No errgroup requiring closures to implement the other half of the pattern, no sync.WaitGroup required to manually set up the synchronization. Just summing channels.

                                The case of 1 worker and 1 consumer is handled by a worker having signature func f() <-chan T. The cases of 1 worker and N consumers, N workers and 1 consumer, and N workers and M consumers are all handled with the same worker signature, with no additional coordination required.

                              1. 1

                                It’s clear that you didn’t read the gist.

                                I mean, I did, I just reject the premise :)

                                That’s not a worker. It’s just a function; a unit of work

                                Given work is e.g. func work() T then my claim is that a “worker” should be an anonymous function defined by the code which invokes the work func, rather than a first-order function provided by the author of the work func itself.

                                The workers are producing an arbitrary number of values that is not known in advance to the consumer . . . the consumer simply reads a channel until the end.

                                  Channels simply don’t support the access pattern of N producers + 1 consumer without a bit of additional code. It’s fair to criticize them for that! But it’s not like the thing is impossible, you just have to add a bit of extra scaffolding on top of the primitives provided by the language.

                  2. 2

                      I think generics will make channels much easier to use correctly. The shenanigans required to handle cancellation, error reporting, fan-out with limits, etc. mean that very few programs handle the edge cases around goroutines. Certainly when I wrote Go, I wouldn’t follow the patterns needed to prevent goroutine leaks, and often I’d decide to panic on error instead of figuring out how to add error channels or result structs with a nil error pointer, etc.

                    What I like about Promise is that it’s a Result[T] - but ideally I’d be able to get the composition boilerplate of structured CSP stuff out of the way with generics instead of adopting the Promise model wholesale.
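
                      Something along these lines is what I have in mind; a sketch only, and the names Result and Go here are made up:

                      package main

                      import (
                          "errors"
                          "fmt"
                      )

                      // Result carries a value or an error over one channel, roughly what a
                      // promise resolves to.
                      type Result[T any] struct {
                          Value T
                          Err   error
                      }

                      // Go runs f in a goroutine and returns a channel that receives exactly
                      // one Result and is then closed.
                      func Go[T any](f func() (T, error)) <-chan Result[T] {
                          c := make(chan Result[T], 1)
                          go func() {
                              defer close(c)
                              v, err := f()
                              c <- Result[T]{Value: v, Err: err}
                          }()
                          return c
                      }

                      func main() {
                          r := <-Go(func() (int, error) { return 42, nil })
                          fmt.Println(r.Value, r.Err)

                          r2 := <-Go(func() (int, error) { return 0, errors.New("boom") })
                          fmt.Println(r2.Value, r2.Err)
                      }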

                      (My history: I loved writing Go for many years but eventually burned out from all the boilerplate and decided to wait for generics)

                2. 5

                  So, in JavaScript, the thing that made promises cool was that instead of increasingly indenting callbacks, you could flatten them out to one level. Async goes even further by flattening that down to look like normal sync code. E.g.

                  // From this 
                  doThing(input1, result => {
                    let newResult = transform(result)
                    doThing2(newResult, finalResult => {
                        finalize(finalResult)
                    })
                  });
                  
                  // To this
                  doThing(input1).then(result => {
                    let newResult = transform(result)
                    return doThing2(newResult)
                  }).then(finalResult => {
                     return finalize(finalResult)
                  })
                  
                  // To this
                  async () => {
                    let result = await doThing(input1)
                    let newResult = transform(result)
                    let finalResult = await doThing2(newResult)
                    return finalize(finalResult)
                  }
                  

                    The code gets increasingly easy to read as you go from callback soup to Promises to async, and because the Promise constructor is such a good abstraction, it’s easy to take a legacy callback function and write a one-line promise wrapper that you can use with async.

                  None of this applies to Go, and no part of this is easier than just using channels.

                  To be fair, channels can be sort of awkward and they do need some higher level abstractions on top, but I don’t think you’ve found it yet. In particular, there’s something to be written that will simplify the interaction of context and task pools, but I don’t think a promise clone helps.

                  It will be fun to see all the experiments with generics though.

                  1. 3

                    this is what generics opponents warned us about

                    1. 2

                      “Look how they massacred my boy”

                      The biggest thing you lose here is select – which would probably work for returning a channel, much like time.After() – but at that point, you’re already using channels so shrug

                        You could save at most a few lines with something along the lines of func Async[T any](f func() T) chan T to just set up the return channel for you, maybe deal with context.
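
                        A sketch of that helper, just to make the shape concrete (illustrative only):

                        package main

                        import (
                            "context"
                            "fmt"
                            "time"
                        )

                        // Async sets up the return channel and runs f in a goroutine, along the
                        // lines of the signature sketched above.
                        func Async[T any](f func() T) <-chan T {
                            c := make(chan T, 1) // buffered so the goroutine can't leak if nobody receives
                            go func() { c <- f() }()
                            return c
                        }

                        func main() {
                            ctx, cancel := context.WithTimeout(context.Background(), time.Second)
                            defer cancel()

                            // You still get select, e.g. racing the result against a context.
                            select {
                            case v := <-Async(func() int { return 42 }):
                                fmt.Println(v)
                            case <-ctx.Done():
                                fmt.Println("gave up:", ctx.Err())
                            }
                        }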

                      1. 2

                        A good video on concurrency in Go

                        https://www.youtube.com/watch?v=5zXAHh5tJqQ

                        1. 2

                          I sincerely hope this is a joke.

                          1. 1

                            To expand slightly on this: I have not understood the motivation for introducing another way of launching concurrency that is not compatible with the preexisting way of doing it, unless there is no other way. This is a thought the famous “What Color is Your Function?” article captures well. I believe Go has very good support for launching concurrent execution and usable ways of dealing with results and communication. The mainline Go implementation makes each unit of concurrency, the goroutine, cheap, all without special syntax.

                              I recognize that promises and async/await have brought real-world benefits to languages that are designed for single-core execution. Go does not fall into this category, however.

                            I see that some async/await implementations, Rust’s notably, have advanced in the direction of making stack usage known because the state in the stack is effectively allocated ahead of time. This is a good thing. However, I don’t see how this is tied to async/await/promises. The same analysis could be done for a stack (or equivalent) that is not reached via some special colored syntax.

                            In the same vein, async/await has advanced the idea that the runtime state of an execution in progress can be passed around and placed where wanted. This can be a good thing. Again, I do not see how this is tied to some special colored syntax.

                            The stack is the only dynamic data structure available to even the most basic C programs*. It’s a shame the stack is implicit and you can’t really know how much you are using, whether it’s enough etc. I think the software development community is very slowly realizing this, but in a very roundabout way. Saying something like “with async/await we don’t need to allocate a stack for a concurrent task” is very misleading. The stack is effectively there, albeit managed in a differently implemented way and accessed via special colored syntax.

                            Having said all this, I would appreciate having the benefits mentioned above in a language. I have the hope that after a decade or two we’ll arrive at a more unified approach and can throw the coloring overboard. I wish this would happen sooner, but I am not holding my breath.

                              You can program today in Go with good support for concurrency, better than in most other languages, and I don’t see async/await as a need. Promises may have their place, but I see it in restricted scenarios. I hope that generics are going to give us easier and less error-prone ways to deal with concurrency through channels, and hopefully even something in the direction of structured concurrency.

                            *Discounting of course very low level stuff.