1. 25

Go doesn’t need async/await; it has goroutines and channels.

    1. 12

      +1

      c := make(chan int)      // future
      go func() { c <- f() }() // async
      v := <-c                 // await
      
      1. 1

I had a negative knee-jerk reaction when I saw the async/await naming. But if you rename/rejig the API a bit, it seems a lot less bad. See: https://gotipplay.golang.org/p/IoHS5HME1bm and, with context: https://gotipplay.golang.org/p/Uwmn1uq5vdU

      2. 2

        this uses goroutines and channels under the hood …

        1. 8

          why do you need to abstract-away goroutines and channels?

          1. 4

            I have no idea. It’s like seeing someone recite a phone book from memory: I can appreciate the enthusiasm without understanding the why

        2. 1

          They said Go didn’t need generics either :)

I get your point, though. That’s why almost every bit of this repo screams “experimental.” I have just been playing around with the pattern in some work/personal projects, seeing how it works ergonomically and whether it improves areas with lots of asynchronous operations.

But it’s only a matter of time until more folks begin trying to abstract away the “nitty-gritty” of goroutines/channels with generics. I personally point to goroutines/channels as Go’s greatest features, but I have seen others really want to abstract them away.

          1. 4

            Goroutines and channels are there to abstract away asynchronous code.

            1. 5

Goroutines and channels are abstractions that are a marked improvement on the state of the art prior to Go, but I find that they tend to be too low-level for many of the problems that programmers are using them to solve. Structured concurrency (or something like it) and patterns like errgroup seem to be what folks actually need.
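
For example, a minimal errgroup sketch along those lines, assuming a hypothetical fetch function as the unit of work (an illustration of the pattern, not code from any linked project):

import (
    "context"

    "golang.org/x/sync/errgroup"
)

// fetch is a stand-in for any unit of work with the shape func(ctx) error.
func fetch(ctx context.Context, url string) error { /* ... */ return nil }

// fetchAll runs one goroutine per URL; the first error cancels the rest via ctx.
func fetchAll(ctx context.Context, urls []string) error {
    g, ctx := errgroup.WithContext(ctx)
    for _, u := range urls {
        u := u // capture the loop variable (needed before Go 1.22)
        g.Go(func() error {
            return fetch(ctx, u)
        })
    }
    // Wait blocks until every goroutine has returned and yields the first error.
    return g.Wait()
}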

              1. 5

Yeah, a long time ago I also thought that one area where generics in Go could hopefully help would be abstracting away channel patterns: things like fan-out, fan-in, debouncing, etc.
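
As one hypothetical example of the kind of helper generics enable, a naive debounce over a channel might look roughly like this (the name and shape are made up for illustration; time.After allocates a timer per value, so a real version would reuse a timer):

import "time"

// debounce forwards a value from in only after no new value has arrived for
// the quiet duration; values arriving during the quiet period replace the
// pending one. The output closes after in closes and any pending value is flushed.
func debounce[T any](in <-chan T, quiet time.Duration) <-chan T {
    out := make(chan T)
    go func() {
        defer close(out)
        var (
            pending T
            have    bool
            timer   <-chan time.Time // nil until a value arrives
        )
        for {
            select {
            case v, ok := <-in:
                if !ok {
                    if have {
                        out <- pending // flush on close
                    }
                    return
                }
                pending, have = v, true
                timer = time.After(quiet) // restart the quiet period
            case <-timer:
                if have {
                    out <- pending
                    have = false
                }
                timer = nil
            }
        }
    }()
    return out
}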

                1. 2

honestly I just want to be able to call select on N channels where N is not known at compile time. A cool thing about promises is being able to create collections of promises. You can’t meaningfully create collections of channels. I mean, sure, you can make a slice of channels, but you can’t call select on a slice of channels. select on a slice of channels is probably not the answer, but it is a hint at the right direction. Maybe all := join(c, c2), where all three of those values are of the same type chan T. I dunno, just spitballing; I haven’t given it that much thought. But the ability to compose promises, and the relative inability to compose channels with the same expressive power, is worth facing honestly.

                  I actually fully hate using async and await in JS but every time I need to fan-out, fan-in channels manually I get a little feeling that maybe there’s a piece missing here.

                  1. 3

                    I just want to be able to call select on N channels where N is not known at compile time.

                    You can.

https://golang.org/pkg/reflect#Select
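
For reference, a select over a runtime-sized slice of channels via reflect.Select looks roughly like this (assuming chans is a []<-chan int):

import (
    "fmt"
    "reflect"
)

// selectAny performs one blocking receive across however many channels exist at runtime.
func selectAny(chans []<-chan int) {
    cases := make([]reflect.SelectCase, len(chans))
    for i, ch := range chans {
        cases[i] = reflect.SelectCase{
            Dir:  reflect.SelectRecv,
            Chan: reflect.ValueOf(ch),
        }
    }
    chosen, v, ok := reflect.Select(cases) // blocks until some case is ready
    if ok {
        fmt.Printf("received %d from channel %d\n", v.Int(), chosen)
    }
}

It works, though as the reply below argues, it is not exactly ergonomic.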

                    1. 2

The argument I’m making is that promises have ergonomics that channels lack, and that although I don’t think Go needs promises, the project in question reflects how promise ecosystems have invested heavily in ergonomics in scenarios that Go leaves for every developer to solve on their own. Calling reflect.Select is not a solution to a problem of ergonomics, because reflect.Select is terribly cumbersome to use.

                    2. 1

                      honestly I just want to be able to call select on N channels where N is not known at compile time

                      That’s still too low-level, in my experience. And being able to do this doesn’t, like, unlock any exciting new capabilities or anything. It makes some niche use cases easier to implement but that’s about it. If you want to do this you just create a single receiver goroutine that loops over a set of owned channels and does a nonblocking recv on each of them and you’re done.

                      every time I need to fan-out, fan-in channels manually I get a little feeling that maybe there’s a piece missing here.

                      A channel is a primitive that must be owned and written to and maybe eventually closed by a single goroutine. It can be received from by multiple goroutines. This is just what they are and how they work. Internalize these rules and the usage patterns flow naturally from them.
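
A rough sketch of one pass of that pattern, where owned is assumed to be the set of channels this goroutine owns and handle is whatever processing applies (as the replies note, driving this in a tight loop amounts to busy-polling):

// pollOwned does a nonblocking receive on each owned channel, skipping any
// channel that has nothing ready.
func pollOwned(owned []<-chan int, handle func(int)) {
    for _, ch := range owned {
        select {
        case v, ok := <-ch:
            if ok {
                handle(v)
            }
        default:
            // nothing ready on this channel right now; move on
        }
    }
}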

                      1. 2

                        loops over a set of owned channels and does a nonblocking recv on each of them and you’re done.

                        How do you wait with this method? Surely it’s inefficient to do this in a busy/polling loop. Or maybe I’m missing something obvious.

Other approaches are one goroutine per channel sending to a common channel, or reflect.Select().

                        1. 1

                          Ah, true, if you need select’s blocking behavior over a dynamic number of channels then you’re down to the two options you list. But I’ve never personally hit this use case… the closest I’ve come is the subscriber pattern, where a single component broadcasts updates to an arbitrary number of receivers, which can come and go. That’s effectively solved with the method I suggested originally.

                        2. 1

                          I’ve been programming Go for ten years. I know how channels work.

                          Promises can be composed, and that is a useful feature of promises. Channels cannot be composed meaningfully, and that is rather disappointing. The composition of channels has much to give us. Incidentally, the existence of errgroup and broadly most uses of sync.WaitGroup are the direct result of not having an ability to compose channels, and channel composition would obviate their necessity entirely.

What are sync.WaitGroup and errgroup actually solving when people use them? Generally, these constructs are used when you have N concurrent producers. A common pattern is to create one output channel, spawn N producers, and have all of them write to that channel. The problem being solved is that once a channel has multiple writers, it cannot be closed. sync.WaitGroup is often used to signal that all producers have finished.

                          This means that practically speaking, producer functions very often have a signature that looks like this:

                          func work(c chan T) { ... }
                          

                          Instead of this:

                          func work() <-chan T { ... }
                          

In practice this is very bothersome. When you have exactly one producer that returns a channel and closes it when done, you can do this:

                          for v := range work() {
                          }
                          

                          This is great and wonderfully ergonomic. The producer simply closes the channel when it’s done. But when you have N producers, where N is not known until runtime, what can you do? That signature is no longer useful, so instead you do this:

                          func work(wg *sync.WaitGroup, c chan T) {
                              defer wg.Done()
                              // do whatever, write to c but don't close c
                          }
                          
                          var wg sync.WaitGroup
                          c := make(chan T)
                          for i := 0; i < n; i++ {
                              wg.Add(1)
                              go work(&wg, c)
                          }
                          
                          done := make(chan struct{})
                          go func() {
                              wg.Wait()
                              close(done)
                          }()
                          
loop:
for {
    select {
    case v := <-c:
        _ = v // use the result of some work
    case <-done:
        break loop // break out of both the select and the for loop
    }
}
                          

That’s pretty long-winded. The producer written for the case of being 1 of 1 and the producer written for the case of being 1 of N have to be different. Maybe you dispense with the extra done channel and close c, or maybe you use errgroup to wrap things up for you automatically; it’s all very similar.

But what if, instead of N workers writing to 1 channel, every worker had their own channel and we had the ability to compose those channels? In this case, composing channels would mean that given the channels X and Y, we compose those channels to form the channel Z. A read on Z would be the same as reading from both X and Y together in a select statement. Closing X would remove its branch from the select statement. Once X and Y are both closed, Z would close automatically.

Given such a join function, we could simply have the worker return its own channel and close it when it’s done, compose all of those channels, and then read off the one combined channel. No errgroup or sync.WaitGroup necessary. Here is an example of what that would look like:

func work() <-chan T { ... }
                          
                          var c <-chan T
                          for i := 0; i < n; i++ {
                              c = join(c, work())
                          }
                          
                          for v := range c {
                              // use the result of some work
                          }
                          

                          Here is a working program that implements this concept at the library level: https://gist.github.com/jordanorelli/5debfbf8dfa0e8c7fa4dfcb3b08f9478

                          Tada. No errgroup necessary, no sync.WaitGroup, none of that. The producer is completely unaware that it is in a group and the consumer is completely unaware that there are multiple producers. You could use that producer and read its results as if it’s just one, or one of many in the exact same way.

                          It makes consuming the result of N workers much easier, it makes it so that a worker may be defined in the event that it is 1 of 1 and 1 of N in exactly the same way, and it makes it so that consumers can consume the work from a channel without any knowledge of how many producers that channel has or any coordination outside of seeing the channel closed. Of course, implementing this at the library level and not at the language level means adding an overhead of additional goroutines to facilitate the joining. If it could be implemented at the language level so that joining N channels into 1 does not require N-1 additional goroutines, that would be neat.

                          This implementation is also subtly broken in that composing X and Y to form Z makes it so that you can’t read off of X and Y on their own correctly now; this is not a full implementation, and there’s certainly a question of implementation feasibility here.
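
For illustration only (this is a sketch of the semantics described above under my own assumptions, not the code in the gist), a pairwise join built on generics could look roughly like this:

// join reads from both inputs until each is closed, then closes the output.
// A nil input is never ready and so never blocks completion, which is what
// lets the accumulation loop above start from a nil channel.
func join[T any](a, b <-chan T) <-chan T {
    out := make(chan T)
    go func() {
        defer close(out)
        for a != nil || b != nil {
            select {
            case v, ok := <-a:
                if !ok {
                    a = nil // a is done; disable its case
                    continue
                }
                out <- v
            case v, ok := <-b:
                if !ok {
                    b = nil // b is done; disable its case
                    continue
                }
                out <- v
            }
        }
    }()
    return out
}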

                          1. 1

                            Channels cannot be composed

                            I don’t think I agree. It’s straightforward to build higher-order constructs from goroutines and channels as long as you understand that a channel must be owned by a single producer.

                            The problem being solved is that once a channel has multiple writers, it cannot be closed.

                            It doesn’t need to be closed. If you have 1 channel receiving N sends, then you just do

                            c := make(chan int, n)
                            for i := 0; i < cap(c); i++ {
                                go func() { c <- 123 }()
                            }
                            for i := 0; i < cap(c); i++ {
                                log.Println(<-c)
                            }
                            

                            This means that practically speaking, producer functions very often have a signature that looks like func work(c chan T) { ... }

                            Hopefully not! Your worker function signature should be synchronous, i.e.

                            func work() T
                            

                            and you would call it like

go func() { c <- work() }()
                            

                            Or, said another way,

                            go work(&wg, c)

                            As a rule, it’s a red flag if concurrency primitives like WaitGroups and channels appear in function signatures. Functions should by default do their work synchronously, and leave concurrency as something the caller can opt-in to.

                            But what if . . .

                            If you internalize the notion that workers (functions) should be synchronous, then you can do whatever you want in terms of concurrency at the call site. I totally agree that goroutines and channels are, in hindsight, too low-level for the things that people actually want to do with them. But if you bite that bullet, and understand that patterns like the one you’re describing should be expressed by consumers rather than mandated by producers, then everything kind of falls into place.

                            1. 1

                              It’s clear that you didn’t read the gist. Your example falls apart immediately when the workers need to produce more than one value.

                              Your worker function signature should be synchronous, i.e. func work() T

                              That’s not a worker. It’s just a function; a unit of work. That’s not at all the problem at hand and has never been the problem at hand. Maybe try reading the gist.

                              The workers in the gist aren’t producing exactly 1 output. They’re producing between 0 and 2 outputs. The case of “run a function N times concurrently and collect the output” is trivial and is not the problem at hand.

The workers are producing an arbitrary number of values that is not known in advance to the consumer. The workers are not aware that they’re in the pool. The consumer is not aware that they’re reading from the pool. There is nothing shared between producers to make them coordinate, and nothing shared with consumers to make them coordinate. There is no coordination between producers and consumers at all. The consumer is not aware of how many workers there are or how many values are produced by each worker; they are only interested in the sum of all work.

The workers simply write to a channel and close it when they’re done. The consumer simply reads a channel until the end. That’s it. No errgroup requiring closures to implement the other half of the pattern, no sync.WaitGroup required to manually set up the synchronization. Just summing channels.

The case of 1 worker and 1 consumer is handled by a worker having the signature func f() <-chan T. The cases of 1 worker and N consumers, N workers and 1 consumer, and N workers and M consumers are all handled with the same worker signature, with no additional coordination required.
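
As a concrete (hypothetical) illustration of that worker shape, where rand.Intn(3) is just a stand-in for “an arbitrary number of values,” such a producer might look like:

import "math/rand"

// work owns its output channel: it sends however many values it produces
// (here 0 to 2) and closes the channel when it is done. Consumers need no
// knowledge of how many values will arrive.
func work() <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for i := 0; i < rand.Intn(3); i++ {
            out <- i
        }
    }()
    return out
}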

                              1. 1

                                It’s clear that you didn’t read the gist.

                                I mean, I did, I just reject the premise :)

                                That’s not a worker. It’s just a function; a unit of work

                                Given work is e.g. func work() T then my claim is that a “worker” should be an anonymous function defined by the code which invokes the work func, rather than a first-order function provided by the author of the work func itself.

                                The workers are producing an arbitrary number of values that is not known in advance to the consumer . . . the consumer simply reads a channel until the end.

Channels simply don’t support the access pattern of N producers + 1 consumer without a bit of additional code. It’s fair to criticize them for that! But it’s not like the thing is impossible; you just have to add a bit of extra scaffolding on top of the primitives provided by the language.
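
For what it’s worth, that scaffolding, written once on the consumer side with generics, might look roughly like this (a sketch, not code from the gist; note that it still uses a sync.WaitGroup internally, which is exactly the coordination the parent commenter would rather not write by hand every time):

import "sync"

// merge fans in any number of producer-owned channels into a single output
// channel, closing the output once every input has been drained.
func merge[T any](ins ...<-chan T) <-chan T {
    out := make(chan T)
    var wg sync.WaitGroup
    wg.Add(len(ins))
    for _, in := range ins {
        go func(in <-chan T) {
            defer wg.Done()
            for v := range in {
                out <- v
            }
        }(in)
    }
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}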

                  2. 2

I think generics will make channels much easier to use correctly. The shenanigans required to handle cancellation, error reporting, fan-out with limits, etc. etc. etc. mean that very few programs handle the edge cases around goroutines. Certainly when I wrote Go, I wouldn’t follow the patterns needed to prevent infinite goroutine leaks, and often I’d decide to panic on error instead of figuring out how to add error channels or result structs with a nil error pointer, etc.

                    What I like about Promise is that it’s a Result[T] - but ideally I’d be able to get the composition boilerplate of structured CSP stuff out of the way with generics instead of adopting the Promise model wholesale.

(My history: I loved writing Go for many years but eventually burned out from all the boilerplate and decided to wait for generics.)
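
For what it’s worth, the “Promise is a Result[T]” idea sketched in Go terms might look roughly like this; the type and helper names are hypothetical, not taken from any existing library:

// Result pairs a value with an error so a single channel can carry both outcomes.
type Result[T any] struct {
    Value T
    Err   error
}

// async runs f in a goroutine and returns a one-shot result channel.
func async[T any](f func() (T, error)) <-chan Result[T] {
    c := make(chan Result[T], 1)
    go func() {
        v, err := f()
        c <- Result[T]{Value: v, Err: err}
    }()
    return c
}

// await blocks for the result, mirroring the future/async/await mapping at the top of the thread.
func await[T any](c <-chan Result[T]) (T, error) {
    r := <-c
    return r.Value, r.Err
}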

                1. 27

                  It’s worth linking to A&A’s (a British ISP) response to this: https://www.aa.net.uk/etc/news/bgp-and-rpki/

                  1. 16

                    Our (Cloudflare’s) director of networking responded to that on Twitter: https://twitter.com/Jerome_UZ/status/1251511454403969026

                    there’s a lot of nonsense in this post. First, blocking our route statically to avoid receiving inquiries from customers is a terrible approach to the problem. Secondly, using the pandemic as an excuse to do nothing, when precisely the Internet needs to be more secure than ever. And finally, saying it’s too complicated when a much larger network than them like GTT is deploying RPKI on their customers sessions as we speak. I’m baffled.

                    (And a long heated debate followed that.)

                    A&A’s response on the one hand made sense - they might have fewer staff available - but on the other hand RPKI isn’t new and Cloudflare has been pushing carriers towards it for over a year, and route leaks still happen.

                    Personally as an A&A customer I was disappointed by their response, and even more so by their GM and the official Twitter account “liking” some very inflammatory remarks (“cloudflare are knobs” was one, I believe). Very unprofessional.

                    1. 15

                      Hmm… I do appreciate the point that route signing means a court can order routes to be shut down, in a way that wouldn’t have been as easy to enforce without RPKI.

                      I think it’s essentially true that this is CloudFlare pushing its own solution, which may not be the best. I admire the strategy of making a grassroots appeal, but I wonder how many people participating in it realize that it’s coming from a corporation which cannot be called a neutral party?

                      I very much believe that some form of security enhancement to BGP is necessary, but I worry a lot about a trend I see towards the Internet becoming fragmented by country, and I’m not sure it’s in the best interests of humanity to build a technology that accelerates that trend. I would like to understand more about RPKI, what it implies for those concerns, and what alternatives might be possible. Something this important should be a matter of public debate; it shouldn’t just be decided by one company aggressively pushing its solution.

                      1. 4

                        This has been my problem with a few other instances of corporate messaging. Cloudflare and Google are giant players that control vast swathes of the internet, and they should be looked at with some suspicion when they pose as simply supporting consumers.

                        1. 2

Yes, that is correct: trust needs to be earned. During the years I worked on privacy at Google, I liked to remind my colleagues of this. It’s easy to forget it when you’re inside an organization like that, surrounded by people who share not only your background knowledge but also your biases.

                      2. 9

While the timing might not have been the best, I am overall on Cloudflare’s side on this. When would the right time to release this be? If Cloudflare had waited another 6-12 months, I would expect A&A to release a pretty much identical response then as well. And I seriously doubt that their actual actions, and the risks associated with them, would be any different.

And as ISPs keep showing over and over, statements like “we do plan to implement RPKI, with caution, but have no ETA yet” all too often mean that nothing will ever happen without efforts like what Cloudflare is doing here.


                        Additionally,

                        If we simply filtered invalid routes that we get from transit it is too late and the route is blocked. This is marginally better than routing to somewhere else (some attacker) but it still means a black hole in the Internet. So we need our transit providers sending only valid routes, and if they are doing that we suddenly need to do very little.

is some really suspicious reasoning to me. I would say that black-hole routing the bogus networks is in every instance significantly better, rather than just marginally better, than hoping that someone reports the problem to them so that they can resolve it manually.

                        Their transit providers should certainly be better at this, but that doesn’t remove any responsibility from the ISPs. Mistakes will always happen, which is why we need defense in depth.

                        1. 6

                          Their argument is a bit weak in my personal opinion. The reason in isolation makes sense: We want to uphold network reliability during a time when folks need internet access the most. I don’t think anyone can argue with that; we all want that!

However, they use it to excuse doing nothing, when they are actually in a situation where both implementing and not implementing RPKI can reduce network reliability.

                          If you DO NOT implement RPKI, you allow route leaks to continue happening and reduce the reliability of other networks and maybe yours.

                          If you DO implement RPKI, sure there is a risk that something goes wrong during the change/rollout of RPKI and network reliability suffers.

So, with all things being equal, I would choose to implement RPKI, because at least with that option I would have greater control over whether or not the network will be reliable. Whereas in the situation of NOT implementing, you’re just subject to everyone else’s misconfigured routers.

Disclosure: I’m a current Cloudflare employee/engineer, but opinions are my own, not my employer’s. I’m also not a network engineer, so hopefully my comment does not have any glaring ignorance.

                          1. 4

Agreed. A&A does have a point regarding Cloudflare’s argumentum in terrorem, especially the name-and-shame “strategy” via their website as well as Twitter. Personally, I think it is a dick move. This is the kind of stuff you get as a result:

                            This website shows that @VodafoneUK are still using a very old routing method called Border Gateway Protocol (BGP). Possible many other ISP’s in the UK are doing the same.

                            1. 1

                              I’m sure the team would be happy to take feedback on better wording.

                              The website is open sourced: https://github.com/cloudflare/isbgpsafeyet.com

                              1. 1

                                The website is open sourced: […]

                                There’s no open source license in sight so no, it is not open sourced. You, like many other people confuse and/or conflate anything being made available on GitHub as being open source. This is not the case - without an associated license (and please don’t use a viral one - we’ve got enough of that already!), the code posted there doesn’t automatically become public domain. As it stands, we can see the code, and that’s that!

                                1. 7

                                  There’s no open source license in sight so no, it is not open sourced.

                                  This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed. I’ll raise that internally.

                                  You, like many other people confuse and/or conflate anything being made available on GitHub as being open source.

You are aggressively assuming malice or stupidity. Please don’t do that. I am quite sure this is just a mistake; nevertheless, I will ask internally.

                                  1. 1

                                    There’s no open source license in sight so no, it is not open sourced.

                                    This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed.

I don’t care either way - not everything has to be open source, e.g. a website. I was merely stating a fact - nothing else.

                                    You are aggressively […]

                                    Not sure why you would assume that.

                                    […] assuming malice or stupidity.

Neither - ignorance at most. Again, this is purely a statement of fact - no more, no less. Most people know very little about open source and/or nothing about licenses. Otherwise, GitHub would not have bothered creating https://choosealicense.com/ - which itself doesn’t help the situation much.

                                  2. 1

                                    It’s true that there’s no license so it’s not technically open-source. That being said I think @jamesog’s overall point is still valid: they do seem to be accepting pull requests, so they may well be happy to take feedback on the wording.

                                    Edit: actually, it looks like they list the license as MIT in their package.json. Although given that there’s also a CloudFlare copyright embedded in the index.html, I’m not quite sure what to make of it.

                                    1. -1

If part of your (dis)service is to publicly name and shame ISPs, then I very much doubt it.

                            2. 2

While I think that this is ultimately a shit response, I’d like to see a more well-wrought criticism of the centralized signing authority that they mentioned briefly in this article. I’m trying to find more, but I’m not entirely sure of the best places to look given my relative naïveté about BGP.

                              1. 4

So, as a short recap: IANA is the top-level organization that oversees the assignment of e.g. IP addresses. IANA delegates large IP blocks to the five Regional Internet Registries (RIRs): AFRINIC, APNIC, ARIN, LACNIC, and RIPE NCC. These RIRs then further assign IP blocks to LIRs, which in most cases are the “end users” of those IP blocks.

Each of those RIRs maintains an RPKI root certificate. These root certificates are used to issue certificates to LIRs that specify which IPs and ASNs that LIR is allowed to manage routes for. Those LIR certificates are in turn used to sign Route Origin Authorizations (ROAs), statements that specify which ASNs are allowed to announce routes for the IPs that the LIR manages.

                                So their stated worry is then that the government in the country in which the RIR is based might order the RIR to revoke a LIR’s RPKI certificate.


                                This might be a valid concern, but if it is actually plausible, wouldn’t that same government already be using the same strategy to get the RIR to just revoke the IP block assignment for the LIR, and then compel the relevant ISPs to black hole route it?

And if anything this feels even more likely to happen, and more legally viable, since it could target a specific IP assignment, whereas revoking the RPKI certificate would make the ROAs for all of the LIR’s IP blocks invalid.

                                1. 1

                                  Thanks for the explanation! That helps a ton to clear things up for me, and I see how it’s not so much a valid concern.

                              2. 1

                                I get a ‘success’ message using AAISP - did something change?

                                1. 1

                                  They are explicitly dropping the Cloudflare route that is being checked.

                              1. 2

                                Kinda reminds me of the Bash/Go mashup, Neugram: https://neugram.io/

                                1. 5

Yah, I’ve always noticed the tedium involved with cd. When I started learning Go it seemed to be a bit worse, which drove me crazy enough to write this little shell utility that allows for “shortcuts”: https://gist.github.com/nkcmr/6d4e5c21d73c433d79547de7ba188815

It allows for stuff like this:

                                  ~ > goto --define farawayfolder $HOME/go/src/github.com/nkcmr/goproj/internal/package/that/is/important
                                  ~ > goto farawayfolder
                                  ~/go/src/github.com/nkcmr/goproj/internal/package/that/is/important > cd ~
                                  ~ > goto --list
                                  farawayfolder -> /Users/nkcmr/go/src/github.com/nkcmr/goproj/internal/package/that/is/important
                                  ~ > goto --rm farawayfolder