1. 19

    I think this article has a lot of straw men and misrepresentations.

    My biggest complaint on the matter at this point in the language’s life is the doublethink.

    There’s no specific logical rule that says that if you support blessed generics then you must also support generics in the hands of users. If your philosophical stance is that generics make code harder to read and that there’s value in having an ecosystem whose use of generics is severely constrained, then it’s perfectly reasonable to bless some uses of generics and not others. Ian’s post demonstrates this philosophy perfectly well in my opinion. They tried implementing slices as a library, and people were so annoyed with it that they built it into the language. “So annoyed” seems like a function of frequency of use (among other things), so it seems reasonable to conclude that if the annoying workarounds only come up rarely, the overall annoyance stays low, and “blessed” generics becomes a reasonable position to hold.

    Considering Go’s lineage and its aim at the “ordinary” programmer, I’d argue safety—in this case—should be the language’s responsibility.

    This analysis seems very incomplete to me. Clearly, safety is actually a goal of Go. What the OP seems to be asking for is more expressive power without sacrificing safety. But it’s a trade off. You don’t get it for free, so it’s weird to just come out and say, “more safety please” without discussing the downsides.

    Generics are not the source of complexity in API design, poorly designed APIs are.

    I don’t believe this for a single second, on multiple dimensions. Firstly, if it’s easier to produce badly designed APIs with generics, then I’d consider generics to be a source of complexity in API design. Secondly, even if that’s not true (I think it is, though), generics can fundamentally add more cognitive load that the programmer has to deal with. In my experience, this second point varies wildly among individuals, and I think there are mitigation strategies as well.

    Anyway, my point here is that such a broad sweeping claim that generics never result in complexity on their own is not at all supported in my experience.

    In Go, I would argue more complexity comes from the workarounds to the lack of generics—interface{}, reflection, code duplication, code generation, type assertions—than from introducing them into the language, not to mention the performance cost with some of these workarounds. The heap and list packages are some of my least favorite packages in the language, largely due to their use of interface{}.

    I mostly agree with this, but I don’t think this supports the previous claim.

    Another frustration I have with the argument against generics is the anecdotal evidence—”I’ve never experienced a need for generics, so anyone who has must be wrong.” It’s a weird rationalization.

    I think this is the worst straw man in the entire article. It’s basically setting up a position that is impossible to agree with, but doesn’t actually provide any evidence that this is the position that most people hold. For example, if the OP had used, “Generics aren’t something I’ve needed to use often, and while I do occasionally miss them, I’m generally happier that the language and the surrounding ecosystem aren’t burdened by complexity,” instead, then the position would have seemed a lot more reasonable and much harder to refute.

    Which position is common? I don’t know. But don’t pretend like you know which position is held by the majority, then knock it down and act like the argument is over. My point here is that the opposing side can be a lot more nuanced than, “I’m right and you’re wrong.”

    Someone pointed out to me that what’s actually going on is the Blub paradox, which Paul Graham coined.

    While I recognize that the Blub paradox is probably a thing, I really can’t stand how it’s most commonly invoked. It’s incredibly snobby. And I can’t help but feel that it’s being invoked in this context in exactly that sort of way: “My programming language preferences are better than yours because I’ve seen the light.” What about people who have “seen the light” but still disagree with your position? The existence of those people is precisely the sort of evidence I’d like to submit to say that the argument presented in this article is woefully incomplete.

    I think the only people this article manages to disagree with are the people who see zero value in generics. I’m sure those people exist, but is that really what the OP set out to do? And if so, I wish they would just say that in the first place.

    1. 24

      Clearly, safety is actually a goal of Go. What the OP seems to be asking for is more expressive power without sacrificing safety. But it’s a trade off. You don’t get it for free, so it’s weird to just come out and say, “more safety please” without discussing the downsides.

      See, this is the argument I can’t reconcile. Go wants to be safe (though I’d argue things like nil and the error handling don’t support this well), but provides poor compile-time type safety because of interface{}. interface{} is a thing because generics don’t exist. Generics don’t exist because of complexity concerns, and Go also wants to be a simple language. We can still enforce runtime type safety with type assertions. Though error-prone, this can work. However, you have now added complexity throughout your codebase for handling the various type conversions from interface{}, and more surface area for screwing up. Is that complexity low enough that it justifies the lack of generics? That seems like the core of the debate.
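
      To make that concrete, here’s a minimal sketch (the Stack type and everything in it is made up for illustration, not from the article) of the ceremony interface{} pushes onto every call site:

      package main

      import "fmt"

      // A stack built on interface{}: the compiler can't stop a caller
      // from pushing the wrong type.
      type Stack struct {
          items []interface{}
      }

      func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

      func (s *Stack) Pop() interface{} {
          n := len(s.items)
          if n == 0 {
              return nil
          }
          v := s.items[n-1]
          s.items = s.items[:n-1]
          return v
      }

      func main() {
          var s Stack
          s.Push(1)
          s.Push("oops") // compiles fine; the mistake only surfaces at runtime

          for {
              v := s.Pop()
              if v == nil {
                  break
              }
              // Every caller repeats this assertion and its error handling.
              n, ok := v.(int)
              if !ok {
                  fmt.Printf("unexpected type %T\n", v)
                  continue
              }
              fmt.Println(n * 2)
          }
      }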

      1. 10

        I’m in the middle of reading a C++ library right now that, as far as I can tell, templated a bunch of things just because they could, for no obvious benefit. The type signatures of their methods are now more complex than they need to be. If the library were in Go, they wouldn’t have templated anything because they couldn’t, and they wouldn’t have used interface{} because there was no need to do so. The difference is that templates are an encouraged practice in C++, but everybody knows to hate interface{} in Go, and to only use it when there’s no other way. At least, that’s my experience anyway; the C++ library I’m looking at is a microcosm of what generics do to ecosystems.

        To address your specific point… I do think that using interface{} in lieu of proper generics can be more complex, but this is only after you’ve determined that some kind of generics are the best way to solve your problem. If you back up before that point, an expressive type system might encourage you to make something generic even if it doesn’t have to be. I think this can be a good thing, but it isn’t always and requires good judgment.

        I feel like this argument would be clearer if you just moved the expressive needle up more. For example, should we always seek to add more expressive power to our type system? If a type system doesn’t have a way to express higher-kinded polymorphism, should we endeavor to add it? Or can we learn something from the experience of others that sometimes abstractions that are powerful are also simultaneously harder to understand? If we can play those arguments at that level, then we can play them at a lower level too. It’s just a matter of degree.

        That seems like the core of the debate.

        I think I agree, but I think the OP could have done a much better job at expressing it.

        1. 4

          …they wouldn’t have used interface{} because there was no need to do so.

          But there was no need to use templates and they still did, so I’m not sure you can make this assumption.

          1. 3

            The next sentence said:

            The difference is that templates are an encouraged practice in C++, but everybody knows to hate interface{} in Go, and to only use it when there’s no other way.

            I’ve read a lot of Go code and I’ve read a lot of code in languages with more expressive type systems. Unnecessary generics are a problem in the latter but not the former. My example is a single data point, a microcosm of what I experience. So yes, I do think I can make that assumption, because it’s actually borne out in practice.

          2. 2

            Good point. Generics tend to infect a lot more code than containers, and they’re hard to stop.

            Additionally, this is a culture issue: most language communities do not prize simplicity of implementation (or of interface, for that matter). There’s always a push to generalize specific solutions (often implemented in libraries), necessitating more infrastructure (e.g. compiler or code-generation support) to make it happen, be it generics, HKTs, etc.

            Despite Haskell’s powerful type system, it still irks me sometimes. If I want to use recursion schemes, for instance, I have to contort an Expr datatype into Expr e so I can use Fix. (I get why, I just wish I didn’t.)

            1. 1

              There is an alternative, namely, treating recursion schemes as design patterns, rather than reusable libraries. Not only is the syntax prettier, but also the code ends up being shorter the vast majority of the time.

              1. 2

                Can you elaborate a little on this so I can dig up some more study material?

                1. 2

                  Just use Fix and friends as fast construction kits then tie your own knots. Usually I end up with a named newtype like newtype Expr = Expr (ExprF Expr) or something like that.

            2. 0

              If you back up before that point, an expressive type system might encourage you to make something generic even if it doesn’t have to be.

              A type system isn’t a sentient entity, it isn’t supposed to “encourage” you to do anything. You’re given a business problem, you first come up with a solution, and only then do you find a programming language in which this solution can be conveniently expressed. This is how things are supposed to work, unless you are a <language X> consultant, right?

              1. 11

                A type system isn’t a sentient entity, it isn’t supposed to “encourage” you to do anything. You’re given a business problem, you first come up with a solution, and only then do you find a programming language in which this solution can be conveniently expressed. This is how things are supposed to work, unless you are a <language X> consultant, right?

                Do you not think that the affordances a language provides tend to influence the style in which programs are written in that language?

                1. 4

                  I do, but that’s in great part because, for most of us, the problem-solving process is:

                  • Choose a technology stack
                  • Design and implement a solution using the chosen stack

                  Whereas I’m suggesting it should be:

                  • Design a solution abstractly, using algorithmic and problem domain concepts
                  • Choose the software stack that best fits your abstract solution
                  • Implement it
                  1. 1

                    I think this is pretty true. We too often overstate how much effort it would take to use a different software stack, and understate how much effort goes into shoehorning our particular problem into an existing one.

                2. 10

                  That’s a nice theory in a vacuum, but that’s about it. There are many constraints that influence programming language choice. The type system is one of many dimensions that influence that choice.

                  1. 1

                    Sorry, I didn’t mean to emphasize the type system much. The same argument applies to any other language feature: It’s just a tool. Don’t become too wedded to it. First come up with solutions, and only then figure out how to express these solutions in <language X>.

                    1. 3

                      I just feel like your advice is too vague to really be useful. It’s like saying, “Just use the right tool for the job, duh.” Well, sure, who’s going to disagree with that? It’s just not interesting to me. It ignores so much real world stuff. If my company’s code base is all in Go and my next task involves improving some feature that is implemented in Go, am I going to go off and do it in another language? That might entail porting large pieces of code, or introducing process boundaries, or whatever. What are you going to say next? Make it operationally easy to use any language at any time? Keep dreaming. :-)

                      I just feel like your comments are strangely off-topic for this thread.

            1. 2

              I complain about every programming language I use. I just find myself complaining about Go the most.

              1. 6

                The answer is never. If you need a generator-like function, you give the generator-like function a callback to call on each item (the callback can return a value indicating whether it’s done, for break-like behavior).

                Callbacks are always more general. The Go developers realized this and were sad about it. I wonder if I can find the mailing list thread when I’m not on my phone, but check out filepath.Walk in the stdlib.

                That’s the right way. Trying to shoehorn channels in here so you can use range is a hack. Someday I’ll write a blog post about how having to spin up a goroutine to service the send side of a channel, just so you can use a channel when you could have implemented it without the goroutine or channel at all, is a huge anti-pattern.

                EDIT: I can’t find the thread; maybe it was in the old Go wiki comments? I’ve looked everywhere.
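
                For what it’s worth, here’s a rough sketch of what I mean, in the spirit of filepath.Walk (the Lines function and its signature are made up for illustration):

                package main

                import (
                    "bufio"
                    "fmt"
                    "os"
                    "strings"
                )

                // Lines is a generator-like function that, instead of returning a
                // channel, invokes fn for each value. Returning false from fn stops
                // the iteration, which is the callback equivalent of break.
                func Lines(sc *bufio.Scanner, fn func(line string) bool) error {
                    for sc.Scan() {
                        if !fn(sc.Text()) {
                            return nil
                        }
                    }
                    return sc.Err()
                }

                func main() {
                    in := bufio.NewScanner(strings.NewReader("one\ntwo\nthree\n"))
                    err := Lines(in, func(line string) bool {
                        fmt.Println(line)
                        return line != "two" // break-like behavior: stop after "two"
                    })
                    if err != nil {
                        fmt.Fprintln(os.Stderr, err)
                    }
                }

                No goroutine, no channel, and the caller still gets break-like control.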

                1. 8

                  Or model it as an iterator:

                  for s.Scan() {
                      fmt.Println(s.Text())
                  }
                  
                  1. 1

                    Yep, another great model. A bit more boilerplate (you need to create an iterator type if you want to support more than one concurrent iteration), but probably the most flexible and the easiest to use as a caller.
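
                    Something like this rough sketch (names made up for illustration), where each call to Iter hands back independent state, so any number of iterations can run over the same collection at once:

                    package main

                    import "fmt"

                    // Set is a toy collection; Iter returns a fresh iterator each time,
                    // so concurrent or nested iterations don't step on each other.
                    type Set struct {
                        items []string
                    }

                    type Iterator struct {
                        set *Set
                        pos int
                    }

                    func (s *Set) Iter() *Iterator { return &Iterator{set: s} }

                    // Next advances the iterator and reports whether a value is
                    // available, mirroring the Scan/Text style above.
                    func (it *Iterator) Next() bool {
                        if it.pos >= len(it.set.items) {
                            return false
                        }
                        it.pos++
                        return true
                    }

                    func (it *Iterator) Value() string { return it.set.items[it.pos-1] }

                    func main() {
                        s := &Set{items: []string{"a", "b", "c"}}
                        for it := s.Iter(); it.Next(); {
                            fmt.Println(it.Value())
                        }
                    }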

                    1. 2

                      The primary advantage is that you leave control up to the caller. For callbacks I generally allow a boolean or error return value, so that the callback function can signal that the iteration should be interrupted. However, it seems that some libraries use callbacks without a way to stop iteration (except for panic()).

                  2. 4

                    I think it is worth pointing out explicitly that when you use the Go pattern discussed here you can’t use break to get out of the loop. If you do, the goroutine will not be garbage collected.

                    1. 1

                      Indeed. You have to close the input reader if you want to cancel early. Python generators have the same issue, which is why you can throw an exception into a generator from the outside.
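
                      A rough sketch of one common way to avoid the leak (illustrative, not from the article): give the generator a done channel to select on, so breaking out of the loop doesn’t strand the goroutine on a blocked send.

                      package main

                      import "fmt"

                      // gen streams values on the returned channel until it runs out of
                      // input or the caller closes done.
                      func gen(done <-chan struct{}, values ...int) <-chan int {
                          out := make(chan int)
                          go func() {
                              defer close(out)
                              for _, v := range values {
                                  select {
                                  case out <- v:
                                  case <-done:
                                      return // caller gave up; exit instead of leaking
                                  }
                              }
                          }()
                          return out
                      }

                      func main() {
                          done := make(chan struct{})
                          defer close(done) // signals the goroutine if we bail out early

                          for v := range gen(done, 1, 2, 3, 4) {
                              if v == 3 {
                                  break // safe: close(done) lets the goroutine exit
                              }
                              fmt.Println(v)
                          }
                      }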

                      1. 1

                        No, Python generators do not have the same problem. If they did, then this interactive session would be leaking memory like crazy, and instead its memory usage remains stable at 8 megs (3.6 resident) for the minute-plus I could be bothered to wait:

                        $ python
                        Python 2.7.3 (default, Mar 14 2014, 11:57:14) 
                        [GCC 4.7.2] on linux2
                        Type "help", "copyright", "credits" or "license" for more information.
                        >>> def x():
                        ...  yield 1
                        ...  yield 2
                        ... 
                        >>> while True:
                        ...  for y in x():
                        ...   break
                        ... 
                        
                        1. 1

                          See the old PEP 325 (from 2003), “Resource-Release Support for Generators”, for what I’m talking about.

                          1. 1

                            That was fixed ten years ago in Python 2.5 with PEP 342, but even before that, stalled generators didn’t result in leaking memory in Python the way blocked goroutines do in Golang. It’s just that, until then, finally blocks in generators were not reliable. That’s what PEP 325 and the part of PEP 342 that supplanted it were designed to fix. PEP 342 is the PEP that introduced the .throw method you refer to upthread.

                            But Python has never, not even in 2003, suffered from the problem Golang apparently does.

                            I haven’t done enough in Golang to know why Golang would have this problem; you’d think that it would be safe to garbage-collect a process that’s blocked on a channel that nothing else has a reference to. I guess you could argue that the orphaned goroutine has nobody to report any possible errors to, but that’s already the case!

                            1. 1

                              Yes, PEP 342 helps with throw. What I’m trying to say is that finally blocks in generators still aren’t reliable unless you’re careful enough to fully exhaust the generator or throw() an exception into it. It’s similar to how you can force-close the input reader to cause a goroutine to exit in this example.

                              1. 1

                                Even that’s not true. As PEP 342 explains, you can also .close() the generator, which is equivalent to .throw()ing a GeneratorExit into it, and the .__del__() finalizer implicitly invokes .close(), so the only way that a finally or __exit__ can fail to run in a generator is if the generator never gets finalized — if the machine loses power, for example, or you kill -9 the process, or (I think) if the generator is part of a reference cycle.

                                1. 1

                                  That’s a great point. The case I’m worried about is the last one (reference cycle or never garbage collected). Like you said, it would be great if Go had a way to do similar garbage collection by terminating a goroutine that’s not needed.

                                  1. 1

                                    While this is in theory a problem, I’ve never seen it, for these reasons:

                                    1. Most of my generators (I use a lot of generators) don’t have with or finally clauses, because they don’t have side effects that they need to undo.
                                    2. I do exhaust most of my generators.
                                    3. I very rarely have circular references in my generators, and indeed I usually try to avoid circular references in general.
                    2. 3

                      Yep, I’ve been disillusioned by channels, and iterators/generators are one of those subtle problems. If you’re not careful, you end up leaking. I’ve found the iterator pattern, as coda suggests, to be a much saner approach.

                      Couldn’t have put it better myself, @jtolds!

                      1. 3

                        I posted this comment on the article but I’ll reproduce it here:

                        I think “never” is bad advice. The comments in the Reddit thread do a good job of covering the pros and cons of this approach. It’s a set of tradeoffs that depends on what you’re doing, like everything in programming. Sometimes a goroutine/channel is a good fit, sometimes it’s not.

                        1. 1

                          Not to be persistent here, but do you understand my point in my reply to your comment on the article about how a callback interface is always more general? Trivially, you can use a callback-based interface to implement a channel-based one. Just use the callback to send on a channel.

                          I agree “never” is bad advice, but that said 2 will never be greater than 3.
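
                          To illustrate that direction, a quick sketch (walk and Channel are made-up names): the callback just forwards each value onto the channel.

                          package main

                          import "fmt"

                          // walk is a callback-based iterator: it calls fn for each value
                          // and stops if fn returns false.
                          func walk(fn func(int) bool) {
                              for i := 1; i <= 3; i++ {
                                  if !fn(i) {
                                      return
                                  }
                              }
                          }

                          // Channel adapts the callback interface to a channel-based one.
                          // Note the consumer has to drain the channel, or you're back to
                          // needing a stop mechanism.
                          func Channel() <-chan int {
                              out := make(chan int)
                              go func() {
                                  defer close(out)
                                  walk(func(v int) bool {
                                      out <- v
                                      return true
                                  })
                              }()
                              return out
                          }

                          func main() {
                              for v := range Channel() {
                                  fmt.Println(v)
                              }
                          }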

                          1. 1

                            Isn’t that reversible? If you have a channel, just call the callback with everything that comes out of it.

                            1. 1

                              Nope, not reversible. Channels add some unfortunate requirements.

                              1) Channels require a goroutine servicing the channel, which unfortunately isn’t trivial overhead. This matters most when your API is being called heavily on hot paths, and it’s frustrating when you’re trying to use an API that creates this overhead for you, so you have to do something else.

                              2) Callbacks, as discussed in a different thread above, make it possible to break out of the iteration, but with a channel you can’t signal back to the generator that it’s time to stop generating. If you stop reading off the channel, then unless you have a large enough buffer the producer will block and fail to die, causing resource leaks.

                      1. 4

                        Nice solid post @tylertreat. As a semi-frequent writer of Go code, it really resonated with me.

                        1. 3

                          Thanks. Honestly, I was on the fence about publishing this at all. As I was writing it, it was starting to feel ranty, and I normally try to hold myself to a highish standard. I know it’s going to catch a lot of flack, and there are a lot of similar posts out there. Go is a bit of a love-hate relationship for me. shrug

                        1. 4

                          Do people actually use mutices in Go? Coming from Erlang it is such blasphemy. Use The Channel, Luke.

                          1. 4

                            Yes! Channels are slow. If you’re writing high-performance Go you won’t use them, you’ll use mutexes (or, better yet, lock-free code). If you look at the standard library, you’ll notice channels are rarely used. Their “API” also has a lot of annoyances.

                            Channels are not great for workload throughput. They are better suited as a mechanism for signaling and timing-related code. In general, channels are a coordination pattern; this is where they are really useful and performance is a non-issue. In that regard, I still see them as a nice tool for communication when used for the right job. Buffered channels, on the other hand, are just a less interesting blocking queue.
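
                            For the shared-state case, something like this sketch (illustrative only) is usually simpler and faster than funneling every update through a channel:

                            package main

                            import (
                                "fmt"
                                "sync"
                            )

                            // counter guards a hot, shared value with a mutex; for this kind
                            // of workload a sync.Mutex is the boring, fast tool.
                            type counter struct {
                                mu sync.Mutex
                                n  int
                            }

                            func (c *counter) inc() {
                                c.mu.Lock()
                                c.n++
                                c.mu.Unlock()
                            }

                            func main() {
                                var c counter
                                var wg sync.WaitGroup
                                for i := 0; i < 8; i++ {
                                    wg.Add(1)
                                    go func() {
                                        defer wg.Done()
                                        for j := 0; j < 1000; j++ {
                                            c.inc()
                                        }
                                    }()
                                }
                                wg.Wait()
                                fmt.Println(c.n) // 8000
                            }

                            Channels still earn their keep for signaling and coordination; it’s the per-item hot path where they hurt.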

                            1. 2

                              This is tricky. :) People come to Go from writing database-backed Web apps in Ruby or Python, and from writing highly tuned CPU-bound C and C++ code. “Slow” can mean pretty different things in those worlds.

                              A toy program got about 2m unbuffered sends/s. If you’re anywhere near millions of sends/s, there’s a good chance you want to bundle up work to send (one example, another) or use something other than a channel. If not, you can do whatever’s easiest.

                              As a separate thing, I think folks starting out are sometimes tempted to, in effect, use a channel to build a WaitGroup, mutex, etc., instead of picking the right tool from sync. I was guilty of that in my first Go code. Lots of new Go users could probably use exposure to some channel-y and some non-channel-y examples to see where different approaches can make sense.
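
                              A rough sketch of the bundling idea (illustrative, not one of the linked examples): send slices instead of single items, trading a little latency for far fewer channel operations.

                              package main

                              import "fmt"

                              // sendBatched groups items into slices of up to batchSize before
                              // sending, so the hot path pays for one channel send per batch
                              // instead of one per item.
                              func sendBatched(items []int, batchSize int) <-chan []int {
                                  out := make(chan []int)
                                  go func() {
                                      defer close(out)
                                      for len(items) > 0 {
                                          n := batchSize
                                          if n > len(items) {
                                              n = len(items)
                                          }
                                          out <- items[:n]
                                          items = items[n:]
                                      }
                                  }()
                                  return out
                              }

                              func main() {
                                  for batch := range sendBatched([]int{1, 2, 3, 4, 5, 6, 7}, 3) {
                                      fmt.Println(batch)
                                  }
                              }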

                              1. 1

                                Wha?? How can channels be slow? It is like making an OO language where objects are too expensive to use.

                              2. 2

                                No, but it’s there.

                                1. 1

                                  Does Go give any guarantees around two goroutines modifying the same data?

                                  1. 3

                                    No, although if you generally follow the pattern of “ownership is passed through channels” (by not holding on to references after putting an item into a channel), it works out OK and you have few or zero places with hand-written mutexes. Goroutines + channels are fast enough that I’ve rarely needed an explicit mutex. The designs that come out of that often look like pipelines or trees that match the program’s actual data flow, so it’s pretty nice. I look at mutexes as being similar to unsafe: the trap door is important but rarely used.

                                    1. 2

                                      Go’s memory model describes all the ways of enforcing the order of concurrent operations.

                                      1. 1

                                        As I understand it, as long as the goroutines pass the data to each other with channels, it’s safe.

                                        (But my Go is limited; someone correct me if I’m wrong)

                                        1. 1

                                          None, but it can be very useful to share read-only memory between multiple goroutines. Plus you can use atomic operations to share mutable state efficiently.
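
                                          A small illustrative sketch of both points, read-only sharing plus sync/atomic for a simple shared count:

                                          package main

                                          import (
                                              "fmt"
                                              "sync"
                                              "sync/atomic"
                                          )

                                          func main() {
                                              // Read-only data can be shared freely once nobody writes to it.
                                              config := map[string]string{"mode": "fast"}

                                              // For a simple mutable value, sync/atomic avoids a mutex or channel.
                                              var hits int64
                                              var wg sync.WaitGroup
                                              for i := 0; i < 4; i++ {
                                                  wg.Add(1)
                                                  go func() {
                                                      defer wg.Done()
                                                      _ = config["mode"]        // concurrent reads are safe
                                                      atomic.AddInt64(&hits, 1) // writes go through atomic ops
                                                  }()
                                              }
                                              wg.Wait()
                                              fmt.Println(atomic.LoadInt64(&hits)) // 4
                                          }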