1. 125
    1. 16

      I’m surprised you didn’t talk about learning to use interfaces effectively. To me, this is the most non-intuitive part of Go for most newcomers (including myself at that stage).

      But to comment on your actual examples… The first example you provide demonstrates a conscious choice on the part of the Go authors. Of course, it’s anyone’s prerogative to agree or disagree with the merit of that objective. And that objective is that Go values ease of reading over ease of writing.

      In that vein, Go is “easy”. How v is removed is obvious to any casual observer. The cyclomatic complexity is staring you in the face. That is easy in a sense: the sense the Go creators intended.
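      For readers without the article at hand, the slice-filtering idiom presumably under discussion looks something like this (a sketch, not the article’s exact code; removeAll is a made-up helper):

```go
package main

import "fmt"

// removeAll returns s without any elements equal to v.
// It filters in place, reusing s's backing array, so the
// loop and the bookkeeping are all visible to the reader.
func removeAll(s []string, v string) []string {
	out := s[:0]
	for _, x := range s {
		if x != v {
			out = append(out, x)
		}
	}
	return out
}

func main() {
	fmt.Println(removeAll([]string{"a", "v", "b", "v"}, "v"))
}
```

The verbosity is the point: every step of the removal is spelled out where a casual observer can see it.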

      1. 4

        I don’t think that interfaces are really a good example of “it can be quite hard to combine simple concepts to do $useful_stuff”, which is the “deeper issue” here, and also not something that’s unique to Go, or even programming languages: microkernels (or indeed, microservices) kind of suffer from the same issue.

        I do agree interfaces can be tricky to use effectively; but it’s a bit of a different issue than what I wanted to explore.

    2. 15

      I don’t think this is a fair evaluation. Declaring something easy feels very personal anyway. To me, speaking the Polish language is easy.

      The example with the list is not comparing which language is easier but which one lets you modify a slice with fewer characters. Does using single characters to represent operations make VimScript a simple language?

      Funny that the inverse argument is used against Go in the next example. Concurrency can be started with a single keyword, yet this is called complexity, because look how hard it can be. I believe what that example actually proves is that concurrency is hard, and that the language does not try to abstract away the complexity of concurrency.

      I wonder if the author would consider the lack of exceptions or generics as a sign of simplicity or complexity.

      With this judgement, is Python a simple language? It has list.remove and list.pop. It supports asynchronous programming with async and await (and a few other ways as well!). It has threading and multiprocessing packages in the standard library as well.

      Python even has a much-needed operator for matrix multiplication. I bet in Ruby you have to use an old-school method or piggyback on some other operator! :P

    3. 25

      Go gives you the tools for concurrency, but doesn’t make them terribly easy - except to misuse.

      Compare Erlang, which does make concurrency an essential part of its worldview, and makes building those systems less hassle-prone than Go.

      1. 9

        They use different models for concurrency, with different tradeoffs and different expressive power, but I don’t think Erlang or Go is generally better or worse than the other with regards to concurrency.

      2. 5

        Could you give us an example? I’m not familiar with Erlang.

        1. 5

          Erlang concurrency comes in two models: a low-level process (a green thread, no shared state) that can send and receive messages using imperative code, and one of several high-level behaviors that encapsulate the error-prone sending and receiving semantics, so the developer only has to implement a handful of callbacks that handle incoming messages, which may or may not change the server’s state and may or may not expect a reply.

          In a chat server, you’d model a connection as a gen_server or gen_statem that receives outgoing messages from TCP and incoming messages as a cast from a channel, and a channel as a gen_server that relays incoming messages to channel members.

          http://erlang.org/doc/design_principles/gen_server_concepts.html has some specific information about behaviors.

          https://learnyousomeerlang.com/the-hitchhikers-guide-to-concurrency is the first of several chapters in Fred Hébert’s excellent book about the low-level concurrency, and https://learnyousomeerlang.com/what-is-otp is the first of several about the high-level behavior-based model that I’m most familiar with.

    4. 12

      Will the example not run 21 jobs? It also appears to be not robust in the face of e.g. the last-started job finishing faster than an earlier one, and signalling early. (Which really just goes to prove the point of the article …)

      1. 8

        Yes - typically you would use https://golang.org/pkg/sync/#WaitGroup to check if every job had completed.
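        Roughly like this (a sketch; the buffered-channel-as-semaphore part is a common idiom, not anything from the article):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runJobs runs n jobs with at most limit running concurrently,
// and returns how many actually completed.
func runJobs(n, limit int) int64 {
	var (
		wg   sync.WaitGroup
		done int64
		sem  = make(chan struct{}, limit) // buffered channel as a counting semaphore
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			// do work
			atomic.AddInt64(&done, 1)
		}(i)
	}
	wg.Wait() // blocks until every job has called Done
	return done
}

func main() {
	fmt.Println(runJobs(20, 3))
}
```

The WaitGroup makes the “did every job finish?” question explicit instead of counting signals on a channel.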

      2. 7

        Ehm, yeah; you’re right 🤦 Execution order wasn’t really a requirement though. I just wanted as minimal an example as possible to show “run 20 jobs in total, but only 3 at the same time”.

    5. 10

      I don’t really like Go, and have only done like… 5 pages of it ever. But I feel like that goroutine example is pretty convincing that Go is easy? The amount of futzing about I have to do in Python to get a similar-working pipeline would lead to maybe 3x as much code for it to be that clean.

      I dunno, Go reintroduces C’s abstraction ceiling problem, meaning it’s hard for someone to show up and offer nicer wrappers for common patterns. But if you’re operating at the “message passing” and task spinup level, you’re gonna have to do flow control, and that code looks very nice (and local, which is a bit of a godsend in this kind of stuff).

      Though I feel you for the list examples. Rust (which, granted, has more obstacles to doing this) also involves a lot of messing around when trying to do pretty simple things. At least it’s not C++ iterators I guess

      1. 18
        import asyncio

        async def do_work(semaphore, id):
            async with semaphore:
                # do work
                await asyncio.sleep(1)

        async def run():
            semaphore = asyncio.Semaphore(3)
            jobs = []
            for x in range(20):
                jobs.append(do_work(semaphore, x))
            await asyncio.gather(*jobs)

        asyncio.run(run())

        About the same length in lines, but IMO quite a bit easier to write.

        1. 4

          That is a good example, but you can’t really compare asyncio and goroutines. The latter are more like “mini threads” and don’t need to inherit all the “ceremony” that asyncio needs to prevent locking up the IO loop.

          1. 12

            Goroutines are arguably worse IMO. They can be run both on the same and on a different thread, which makes you think about the implications of both. But here’s a somewhat wordier solution with regular Python threads, which may be more comparable:

            import threading
            import time

            def do_work(semaphore, id):
                with semaphore:
                    # do work
                    time.sleep(1)

            def run():
                semaphore = threading.Semaphore(3)
                threads = []
                for x in range(20):
                    thread = threading.Thread(target=do_work, args=(semaphore, x))
                    thread.start()
                    threads.append(thread)
                for thread in threads:
                    thread.join()

            run()
            As you can see, not much has changed in the semantics; just the wording changed, and I had to manually join the threads due to the lack of a helper in the standard library. I could probably easily modify this to work with the multiprocessing library as well, but I’m not gonna bother.

            Edit: I did bother. It was way too easy.

            import multiprocessing as mp
            import time

            def do_work(semaphore, id):
                with semaphore:
                    # do work
                    time.sleep(1)

            def run():
                semaphore = mp.Semaphore(3)
                processes = []
                for x in range(20):
                    process = mp.Process(target=do_work, args=(semaphore, x))
                    process.start()
                    processes.append(process)
                for process in processes:
                    process.join()

            if __name__ == "__main__":
                run()
            1. 10

              goroutines aren’t just good for async I/O. They also work well for parallelism.

              Python’s multiprocessing module only works well for parallelism in basic cases. I’ve written a lot of Python and a lot of Go. When it comes to writing parallel programs, Go and Python are in different categories.

              1. 4

                It’s best to decide which one you actually want. If you try to reap the benefits of both event loops and thread parallelism, you’ll have to deal with the disadvantages of both. Generally, you should be able to reason about this and split those concerns into separate tasks. Python has decent support for that, with asyncio supporting running functions in threadpool or processpool executors.

                I do agree though that Python isn’t the best at parallelism, because it carries quite a lot of historical baggage. When its threading was being designed in 1998, computers with multiple CPUs were rare, and the first multi-core CPU was still 3 years away[1], with consumer multi-core CPUs arriving 7 years after that. The intention was to allow multiple tasks to run seemingly concurrently on a single CPU and to speed up IO operations. At the time, the common opinion was that most machines would continue to have only a single core, so the threading module was designed appropriately for the language, with a GIL, guaranteeing that it would not corrupt memory. Sadly, things didn’t turn out the way they were expected to, and now we have a GIL problem on our hands that is very difficult to solve. It’s not unlike errno in C, which now requires macro hacks to work correctly between threads; it’s just that the GIL touches things that are a bit harder to hack around.

                1. 7

                  I’m aware of the history. My point is that the Python code you’ve presented is not a great comparison point because it’s comparing apples and oranges in a substantial way. In the Go program, “do work” might be a CPU-bound task that utilizes shared mutable memory and synchronizes with other goroutines. If you try that in Python, it’s likely you’re going to have a bad time.

                  1. 3

                    The example with the multiprocessing module works just fine for CPU tasks. asyncio works great for synchronization and sharing memory. You just mix and match depending on your problem. It is quite easy to defer CPU-heavy or blocking IO tasks to an appropriate executor with asyncio. It forces you to better separate your code, and in this way you only need to deal with one type of concurrency at a time. Goroutines mash them together, leaving you to deal with thread problems where coroutines would have worked just fine, and coroutine problems where threads would have worked just fine. In Go you only have a flathead screwdriver for everything between nails and crosshead screws. It surely works, sometimes even well. But you have to deal with the warts of trying to do everything with one tool. On the other hand, Python tries to give you a tool for most situations.

                    1. 6

                      The example with multiprocessing module works just fine for CPU tasks.

                      But not when you want to add synchronization on shared mutable memory. That’s my only point. You keep trying to suck me into some larger abstract conversation about flat-head screwdrivers, but that’s not my point. My point is that your example comparison is a bad one.

                      1. 3

                        Give me an example of a task of that nature that cannot be solved using multiprocessing and asyncio and I’ll show you how to solve it. You shouldn’t try to use a single tool for everything - every job has its tools, and you might need more than one to do it well.

                        1. 4

                          I did. Parallelism with synchronized shared writable memory is specifically problematic for multiprocessing. If you now also need to combine it with asyncio, then the simplicity of your code goes away. But Go code remains simple.
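                          To make that concrete, here’s a rough sketch of what I mean: workers sharing a mutex-guarded priority queue, where processing a task can spawn new tasks (my own toy construction; the names, seed values and spin-wait are all made up for illustration):

```go
package main

import (
	"container/heap"
	"fmt"
	"sync"
)

// A task carries a priority and a payload; processing task n
// spawns a follow-up task n-1 until n reaches 1.
type task struct{ priority, n int }

type pq []task

func (h pq) Len() int            { return len(h) }
func (h pq) Less(i, j int) bool  { return h[i].priority < h[j].priority }
func (h pq) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *pq) Push(x interface{}) { *h = append(*h, x.(task)) }
func (h *pq) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// run processes the seed tasks on `workers` goroutines sharing a
// mutex-guarded priority queue and a shared sum, then returns the sum.
func run(seeds []int, workers int) int {
	var (
		mu      sync.Mutex
		queue   = &pq{}
		pending sync.WaitGroup // counts outstanding tasks, not workers
		sum     int
	)
	push := func(t task) {
		pending.Add(1)
		mu.Lock()
		heap.Push(queue, t)
		mu.Unlock()
	}
	for _, n := range seeds {
		push(task{priority: n, n: n})
	}
	done := make(chan struct{})
	for w := 0; w < workers; w++ {
		go func() {
			for {
				mu.Lock()
				if queue.Len() == 0 {
					mu.Unlock()
					select {
					case <-done:
						return
					default: // spin; a sync.Cond would be cleaner
						continue
					}
				}
				t := heap.Pop(queue).(task)
				sum += t.n // shared mutable state, guarded by mu
				mu.Unlock()
				if t.n > 1 { // processing may generate new tasks
					push(task{priority: t.priority, n: t.n - 1})
				}
				pending.Done()
			}
		}()
	}
	pending.Wait()
	close(done)
	return sum
}

func main() {
	fmt.Println(run([]int{1, 2, 3, 4, 5}, 3))
}
```

Everything (the heap, the shared sum, the task counter) lives in one address space with one lock; there’s no manager process or pickling involved.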

                          You shouldn’t try to use single tool for everything

                          If you think I need to hear this, then I think this conversation is probably over.

                          1. 2

                            Parallelism with synchronized shared writable memory

                            You describe a class of problems. But I cannot solve a class of problems without knowing at least one concrete problem from the class. And I do not.

                            1. 4

                              Here’s an example of something I was trying to do yesterday:

                              I wanted to use multiprocessing to have multiple workers pull (CPU-bound) tasks off a (shared) priority queue, process each task in a way that generates zero or more new tasks (with priorities) and put them back on the queue.

                              multiprocessing.Manager has a shared Queue class, but not a shared priority queue, and I couldn’t figure out a way to make it work, and eventually I gave up. (I tried using heapq with a shared multiprocessing.list and that didn’t work.)

                              If you can tell me how to solve this, I would actually be pretty grateful.

                              1. 1

                                I gave it a bit of time today, here’s the result. Works decently well, if you don’t do CPU expensive stuff (like printing big numbers) in the main process and your jobs aren’t very short.

        2. [Comment removed by author]

          1. 5

            As if this didn’t sit on an even larger Linux kernel with preemptive threading, which means the work could trash its memory if it looks at it in the wrong way /s. Of course, there are tradeoffs, and it’s everyone’s choice which ones to accept. But it doesn’t look like this is limiting the maximum number of concurrently running jobs to 3, which was the original goal of the code.

    6. 13

      Hear hear!

      This is exactly the kind of low-slung cognitive hurdle I find really turns me off to any programming language.

      I think part of my distaste stems from the fact that this language was designed so very recently, in the history of programming languages.

      It’s easy to understand why a language like C has some aspects of its syntax that might not feel immediately natural to modern day programmers, it came into its own 30 years ago on radically different hardware than what we run today.

      Go doesn’t have that excuse. I feel like its designers revel in this class of abstraction fail - mostly because they’ve been using C forever and their choices don’t present roadblocks to them.

      Make no mistake, enjoying the experience of using one programming language over another is an inherently subjective thing. For my meager brain and the problems I want and need to solve, I will always choose a programming language that presents a higher level of abstraction to me as a programmer.

      Thanks for writing this!

      1. 3

        Do you think there is any truth to the following statement?

        Abstraction solves all problems except for the problem of abstraction.

        If so, do you think this is a problem worth addressing? If so, how? Are there different ways to address it than you would that don’t involve “reveling in abstraction fail”?

        1. 5

          I don’t see abstraction as a problem. I see it as a solution.

          If you’re referring to the performance penalties abstraction tends to impose, then choosing a tool with a lower level of abstraction might well make sense.

          So, tool to task, and then no abstraction problem :)

          1. 9

            If abstraction is never viewed as a problem to you, then you likely don’t share some very basic assumptions made by both myself and likely the designers of Go. But maybe that’s not as fun as snubbing your nose and talking about how instead they “revel in abstraction fail.” And then further go on to speculate as to why they think that way.

            1. 7

              Your point about shared assumptions is very true.

              I tend to enjoy solving problems that aren’t particularly performance intensive, and I enjoy working at a very high level of abstraction.

              However clearly there’s another viewpoint that deserves to be heard here, so I’ll try to hunt down some resources around the points you’re making. Actually I just found this Wikipedia article which gives this a bit more flavor and I think I understand it better. Abstraction (phrased as indirection in the article) can cause you to think you’re solving hard problems when in reality you’re just moving them around.

              Also I apologize if my poor choice of words offended you. I didn’t intend to snub my nose at anything, and I should have been more careful to strictly personalize the point I was making.

              1. 10

                Thanks. No apology necessary, I’d just love if we could talk about PL design in terms of trade offs instead of being so reductionist/elitist.

                I don’t think your Wikipedia article quite does the idea I’m trying to get across justice. So I’ll elaborate a bit more. The essential idea of “abstraction solves all problems except for the problem of abstraction” is that abstraction, can itself, be a problem. The pithy quote doesn’t say what kind of problem, and I think that’s intentional: abstraction can be at the root of many sorts of problems. You’ve identified one of them, absolutely, but I think there are more. Principally on my mind is comprehension.

                The problem with saying that abstraction can cause comprehension problems is that abstraction can also be the solution to comprehension problems. It’s a delicate balance of greys, not an absolutist statement. This is why the “herp derp Go designers have no excuse for ignoring 30 years of PL advancement” is just such a bad take. It assumes PL design has itself advanced, that there’s no or little room for reasonable disagreement because “well they wrote C for so long that their brains have rotted.” Sure, you didn’t say that exact thing, but that’s the implication I took away from your words.

                Do you think all abstractions are created equal? I don’t. I think some are better than others. And of course it depends on what you’re trying to do. Some abstractions make reading the code easier. Some abstractions can make it harder, especially if the abstraction is more powerful than what is actually necessary.

                There’s another pithy saying, “No abstraction is better than the wrong abstraction.” This gets at another problem with abstraction, which is that you can build an abstraction too tightly coupled to a problem space, and when that problem space expands or changes, sometimes fixing a bad abstraction is harder than fixing code that never had the abstraction in the first place. Based on this idea, you might, for example, decide to copy a little code instead of adhering to DRY and building an abstraction.

                Performance is probably the biggest problem I have with abstraction on a day-to-day basis, but it seems like that’s one you’re already familiar with.

                There are many ways to tackle the problems of abstraction. Go is perhaps on the more extreme end of the spectrum, but IMO, it’s a very reasonable one. If you believe one of the primary motivations for problematic abstractions is the tools available to build abstractions, for example, then a very valid and very reasonable solution to that would be to restrict the tooling. This is, for example, the primary reason why I’ve long been opposed to adding monads to Rust.

                I don’t believe this is just limited to programming languages either. The comprehension problem with abstraction can be seen everywhere. When conversing with someone, or reading a book or even in educational materials, you’ll often see or hear a general statement, and that’s quickly followed up with examples. Because examples are concrete and help us reason through the abstraction. Of course, the extent to which examples are helpful or even necessary depends on the person. I’ve met plenty of people who have an uncanny ability to stay in abstraction land without ever coming down on to something concrete. I’m not one of them, but I know they exist.

                1. 7

                  Thank you very much for taking the time to write this up.

                  Let’s hope I manage to actually learn from this and not succumb to knee jerk mis-scoped reactions in the future.

                  What’s interesting is that I have a very different reaction to Rust. I feel like Rust brings some very new ideas to the table. It offers a mix of very high and very low level abstractions, but in general offers a much lower level of abstraction than the programming languages I work in regularly and am most familiar with - Python & Ruby.

                  I think part of my attitude towards Go is rooted in my attitude towards C. It’s an incredible and venerable tool, but one I have stubbed my toe on enough that I find it painful and not enjoyable, and that has perhaps unjustly tainted my perceptions WRT Golang.

                  1. 5

                    Here are some opinions I’ve written on Rust vs. Go, although mostly about Go’s poor abstraction facilities and how they inhibit me. https://users.rust-lang.org/t/what-made-you-choose-rust-over-go/37828/7

                    1. 2

                      This seems like it would be an interesting read, but unfortunately it doesn’t seem to load?

                      1. 5

                        Works for me. I’ve copied it below.

                        I’ve been writing Go ~daily since before 1.0 came out. I’ve been writing Rust ~daily also since before its 1.0 release. I still do. I primarily write Go at work and Rust in my free time these days, although I sometimes write Rust at work and sometimes write Go in my free time.

                        Go’s goals actually deeply resonate with me. I very much appreciate its fast compilation times, opinionated and reasonably orthogonal design and the simplicity of the overall tooling and language. “simplicity” is an overloaded term that folks love to get pedantic about, so I’ll just say that I’m using it in this context to refer to the number of different language constructs one needs to understand, and invariably, how long it takes (or how difficult it is) for someone to start becoming productive in the language.

                        I actually try hard to blend some of those goals that I find appealing into Rust code that I write. It can be very difficult at times. In general, I do my best to avoid writing generic code unless it’s well motivated, or manifests in a way that represents a common pattern among all Rust code. My personal belief is that Go’s lack of generics lends considerably to its simplicity. Writing Go code heavily biases towards less generic code, and thus, more purpose driven code and less YAGNI code. I don’t mean this to be a flippant comment; the best of us succumb to writing YAGNI code.

                        So if I like Go’s goals so much, why do I use Rust? I think I can boil it down to two primary things.

                        The first is obvious: performance and control. The projects I tend to take on in my free time bias towards libraries or applications that materially benefit from as much performance tuning as you want. Go has limits here that just cannot be breached at all, or cannot be breached without sacrificing something meaningful (such as code complexity). GC is certainly part of this, and not just the GC itself, but the effects that GC has on the rest of your code, such as memory barriers. Go just makes it too hard to get optimal codegen in too many cases. And this is why performance critical routines in Go’s standard library are written in Assembly. I don’t mind writing Assembly when I have to, but I don’t want to do it as frequently as I would have to in Go. In Rust, I don’t have to.

                        The second reason is harder to express, but the most succinct way I can put it is this: Go punishes you more than Rust does when you try to encapsulate things. This is a nuanced view that is hard to appreciate if you haven’t used both languages in anger. The principal problems with Go echo a lot of the lower effort criticism of the language. My goal here is to tie them to meaningful problems that I hit in practice. But in summary:

                        • The lack of parametric polymorphism in Go makes it hard to build reusable abstractions, even when those abstractions don’t add too much complexity to the code. The one I miss the most here is probably Option<T>. In Go, one often uses *T instead as a work-around, but this isn’t always desirable or convenient.
                        • The lack of a first class iteration protocol. In Rust, the for loop works with anything that implements IntoIterator. You can define your own types that define their own iteration. Go has no such thing. Instead, its for loop is only defined to work on a limited set of built-in types. Go does have conventions for defining iterators, but there is no protocol and they cannot use the for loop.
                        • Default values. I have a love-hate relationship with default values. On the one hand, they give Go its character and make a lot of things convenient. But on the other hand, they defeat attempts at encapsulation. They also make other things annoying, such as preventing compilation errors in struct literals when a new field is added. (You can avoid this by using positional struct literal syntax, but nobody wants to do that because of the huge sacrifice in readability.)

                        So how do these things hamper encapsulation? The first two are pretty easy to exemplify. Consider what happens if you want to define your own map type in Go. Hell, it doesn’t even have to be generic. But maybe you want to enforce some invariant about it, or store some extra data with it. e.g., Perhaps you want to build an ordered map using generics. Or for the non-generics case, maybe you want to build a map that is only permitted to store certain keys. Either way you slice it, this map is going to be a second class citizen to Go’s normal map type:

                        • You can’t reuse the mymap[key] syntax.
                        • You can’t reuse the for key, value := range mymap { construct.
                        • You can’t reuse the value, ok := mymap[key] syntax.

                        Instead, you wind up needing to define methods for all of these things. It’s not a huge deal, but now your map looks different from most other maps in your program. Even Go’s standard library sync.Map suffers from this. The icing on the cake is that it’s not type safe because it achieves generics by using the equivalent of Rust’s Any type and forcing callers to perform reflection/type conversions.
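                        Concretely, such a second-class map ends up looking something like this (a minimal sketch to illustrate the point; IntMap and its methods are made up):

```go
package main

import "fmt"

// IntMap only permits non-negative keys: an invariant a
// built-in map can't enforce.
type IntMap struct{ m map[int]string }

func NewIntMap() *IntMap { return &IntMap{m: map[int]string{}} }

// Set stands in for m[k] = v, which custom types can't reuse.
func (im *IntMap) Set(k int, v string) error {
	if k < 0 {
		return fmt.Errorf("negative key %d", k)
	}
	im.m[k] = v
	return nil
}

// Get stands in for value, ok := m[k].
func (im *IntMap) Get(k int) (string, bool) {
	v, ok := im.m[k]
	return v, ok
}

func main() {
	m := NewIntMap()
	m.Set(1, "one")
	v, ok := m.Get(1)
	fmt.Println(v, ok)
}
```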

                        Default values are a completely different beast. Their presence basically makes it impossible to reason conclusively about the invariants of a type. Say for example you define an exported struct with hidden member fields:

                        type Foo struct {
                            // assume there are important invariants
                            // that relate these three values
                            a, b, c int
                        }

                        func NewFoo() Foo { ... }

                        func (f *Foo) DoSomething() { ... }

                        Naively, you might assume that the only way to build Foo is with NewFoo. And that the only way to mutate its data is by calling methods on Foo such as DoSomething. But this isn’t quite the full story, since callers can write this:

                        var foo Foo

                        Which means the Foo type now exists as a default value where all of its component members are also default values. This in turn implies that any invariant you might want to come up with for the data inside Foo must account for that component’s default values. In many cases, this isn’t a huge deal, but it can be a real pain. To the point where I often feel punished for trying to hide data.

                        You can get a reprieve from this by using an unexported type, but that type cannot appear anywhere (recursively) in an exported type. At least in that case, you can be guaranteed to see all such constructions of that type in the same package.

                        None of these things are problems in Rust. YMMV over how much you consider the above things to be problems, but I’ve personally found them to be the things that annoy me the most in Go. An honorary mention is of course sum types, and in particular, exhaustive checks on case analysis. I’ve actually tried to solve this problem to some extent, but it’s not ideal for a number of reasons because of how heavyweight it is: https://github.com/BurntSushi/go-sumtype — From my perspective, the lack of sum types just means you can move fewer invariants into the type system. For me personally, sum types are a huge part of my day-to-day Rust coding and greatly lend to its readability. Being able to explicitly enumerate the mutually exclusive states of your data is a huge boon.

                        Anyway, Rust has downsides too. But I’m tired of typing. My main pain points with Rust are: 1) complexity of language features, 2) churn of language/library evolution and 3) compile times. Go suffers from basically none of those things, which is pretty nice.

                2. 0

                  It assumes PL design has itself advanced

                  I think it has in some ways, but it’s as much in figuring out which abstractions are valuable as it is with expanding the scope of what abstractions we can express.

                  And I think Go provided a really solid catalyst for a lot of good PLT and developer experience improvements in newer programming languages.

        2. 3

          I think the answer is to have a low level kernel and build up the rest of the language from lower abstraction primitives. Present the highest level layer as the default interface, but provide access to the primitives.

          For example, python has IO buffers that it uses to open files, but you normally don’t need to access them to do file work. If you are doing something a bit weird you can drop down to that level.

    7. [Comment removed by author]

    8. 9

      The nice thing about Go is that when it is verbose, like for the list deletion example, it highlights that the computer is doing more work. If you are constantly deleting items in the middle of an array (or if you’re doing this at all), it might not be the best choice of data structure.

      1. 31

        This sounds like a post hoc rationalization to me. You can always hide arbitrary computations behind a function call.

        I believe this is explained by the lack of generics, and I predict that, if/when generics are implemented, go stdlib will gain a bunch of slice-manipulating functions a-la reverse.

        1. 2

          Which is probably one of the stronger arguments for generics.

        2. 2

          This sounds like a post hoc rationalization to me.

          This was always pretty reliably cited as the motivation for a lot of design decisions, back when Go was just released.

          You can always hide arbitrary computations behind a function call.

          Yes, but one nice thing about Go is that functions are essentially the only way to hide arbitrary computation. (And, relatedly, the only mechanism that Go has to build abstraction.) When you’re reading code, you know that a + b isn’t hiding some O(n²) bugbear. That’s valuable.

          1. 3

            Yup, I agree with the general notion that reducing expressive power is good, because it makes the overall ecosystem simpler.

            But the specific example with list manipulation is “a wrong proof of the right theorem”, and this is what I object to.

            Incidentally, a + b example feels like a wrong proof to me as well, but for a different reason. In Go, + can also mean string concatenation, so it can be costly, and doing + in a linear loop can be accidentally quadratic. (I don’t write Go, might be wrong about this one).

            1. 1

              In Go, + can also mean string concatenation, so it can be costly

              How do you mean? String concatenation is O(1)…

              doing + in a linear loop can be accidentally quadratic. (I don’t write Go, might be wrong about this one).

              How’s that?

              1. 2

                String concatenation is O(1)…

                Hm, I don’t think that’s the case, seem to be O(N) here:

                λ bat -p main.go
                package main

                import (
                    "fmt"
                    "time"
                )

                func main() {
                    for p := 0; p < 10; p++ {
                        l := 1000 * (1 << p)
                        start := time.Now()
                        s := ""
                        for i := 0; i < l; i++ {
                            s += "a"
                        }
                        elapsed := time.Since(start)
                        fmt.Println(len(s), elapsed)
                    }
                }
                λ go build main.go && ./main 
                1000 199.1µs
                2000 505.49µs
                4000 1.77099ms
                8000 3.914871ms
                16000 14.675162ms
                32000 49.782358ms
                64000 182.127808ms
                128000 661.137303ms
                256000 2.707553408s
                512000 11.147772027s
                1. 3

                  You’re right! O(N). Mea culpa.

      2. 41

        it highlights that the computer is doing more work

        It seems strange to me to trust the ritual of writing tedious, obnoxious, bug-prone code to reflect the toil the computer will be burdened with. Doubly strange when append() is a magical generic built-in that can’t be implemented in Go itself, so it’s hard to say, without disassembling the output and studying the compiler, what the code will actually do at runtime; I guess we can make very good educated guesses.

      3. 20

        IMHO, no performance concerns are valid* until a profiler is run. If it wasn’t run, the program was fast enough to begin with, and if it was, you’ll find out where you actually have to optimise.

        • Exception: if there are two equally readable/maintainable ways to write code, and one is faster than the other, prefer the faster version. Otherwise, write readable code and optimise when/if needed.
        1. 4

          I feel this statement is a bit too generic, even with the exception. Especially when you’re talking about generic stdlib(-ish) functions that may be used in a wide variety of use cases, I think it makes sense to preëmptively think about performance.

          This doesn’t mean you should always go for the fastest possible performance, but I think it’s reasonable to assume that sooner or later (and probably sooner) someone is going to hit some performance issues if you just write easy/slow implementations that may be “fast enough” for some cases, but not for others.

      4. 18

        But I want the computer to be doing the work, not me as human source code reader or, worse, source code writer.

        1. 1

          Sure, but a language that hides what is computationally intensive or not, is worse.

          1. 24

            Source code verbosity is poorly correlated with runtime cost, e.g. bubble sort code is shorter and simpler than other sorting algorithms.

            Even in the case of removing items from arrays, if you have many items to remove, you can write a tiny loop with quadratic performance, or write longer, more complex code that does extra bookkeeping to filter the items in one pass.

          2. 6

            Then don’t build your language to hide what’s computationally expensive. Fortunately, most languages don’t do any sort of hiding; folding something like list deletion into a single function call is not hiding it - so this statement isn’t relevant to the discussion about Go (even if true in a vacuum). Any function or language construct can contain any arbitrary amount of computational work - the only way to know what’s actually going on is to read the docs, look at the compiler, and/or inspect the compiled code. And, like @kornel stated, “Source code verbosity is poorly correlated with runtime cost”.

        2. 1

          As a user, I don’t want the computer to be doing the work. It reduces battery life.

          1. 10

            If you’re referring to compilation or code-generation - that work is trivial. If you’re referring to unnecessary computation hidden in function calls - writing the code by hand is an incredibly bad solution for this. The correct solutions include writing efficient code, writing good docs on code to include performance characteristics, and reading the function source code and/or documentation to check on the performance characteristics.

            As @matklad said, “You can always hide arbitrary computations behind a function call.”

    9. 5

      Concurrency in Go has bitten me so many times, but after getting the hang of channels, WaitGroups, and when to mix in more traditional concurrency primitives like mutexes, it’s also helped me in other languages - the async frameworks in Rust often use mpsc (multi-producer, single-consumer) channels which work similarly to Go’s channels. The idea of avoiding starting threads for every separate task has helped for performance in Rust as well.

      But none of those are easy concepts. Concurrency is not simple or easy to reason about and race conditions will often pop up when you least expect. I agree with your idea that Go is not an easy language to use effectively or master, but I would argue that the learning curve is still lower than something like C or C++ or even Rust.

      As a corollary to all of the above; learning the language isn’t just about learning the syntax to write your ifs and fors; it’s about learning a way of thinking.

      I love that quote. There’s definitely a certain something missing when someone unfamiliar with Go writes Go code… and some of it is hard to put into words. It’s the same with most languages too - Rust, Python, Lisp, C, etc.

      It works a little bit differently, but this is probably how I would have accomplished the limited workers example: https://play.golang.org/p/knuktiN0jIs The downside here is that if a task fails it has a chance to kill an entire worker. The upside is that you only start 3 goroutines rather than n.

    10. 5

      Every language carries some sort of minimal level of background knowledge to use it effectively. For JavaScript, it might be async and promises. For Java, it might be annotation processing. For Lisp, it might be functional programming concepts.

      Imagine Go being a language whose standard libraries are full of abstractions like Either, Result, Maybe, Option, and Some. It would be even more on the “not simple” side. Imagine Go being a language without type checking. It would be even harder to understand a large program. Imagine Go being a language with a myriad of build frameworks like Maven, SBT, and Grails. That would be even more difficult for a newcomer to pick up and learn. Imagine Go being a language whose object properties may be lazily evaluated in a way that is not easy to determine from the code; that could certainly be harder for a newcomer to understand.

      On a spectrum of easy to get right and frustrating to pick up and learn, Go can do a lot worse.

      1. 11

        I would say that Either, Result, Maybe, Option, and Some would make it easier, since then you would need to learn them only once, instead of once per codebase. This is the old “patterns” discussion over again.

        1. 2

          I respect your opinion that having these abstractions would make Go easier. I certainly wish I could use higher-order types in Go code to reduce boilerplate. The whole return (retval, error) pattern seems like a hack around not having sum types. At the same time, I think this particular view is a subjective one. Like any design decision, this one has its pros and cons.

          For people who are already familiar with these concepts and can easily differentiate the nuance between an Either and Result, it is a simplifying decision. For people who are not familiar with these abstractions, it would be something they’d have to learn first before they can use the language. There is a cost here.

          My point is that it’s really not fair to judge a programming language on a single dimension of simple vs hard. It is more fair to look at each programming language for the trade-offs the designers made. While I cannot speak on behalf of the Go designers, I think they have made certain things more tedious in order to achieve simplicity in other areas.

          1. 1

            For people not familiar with an Option or Either type, they would need to learn before interacting with a codebase which makes use of them. The same goes for any codebase now that uses home-made replacements for those, but now that knowledge is not transferrable to the next codebase, so the newcomer will have to squint hard and pattern-match to see that “oh, this Maybe type is essentially an Option with different semantics”.

    11. 3

      Translating the goroutine example into Monte, it took me three tries to get it working:

      var i := 0
      def done
      def go():
          if (i >= 20) { bind done := true } else {
              traceln(i += 1)
              go<-()
          }
      for _ in (0..!3):
          go<-()
      when (done) -> { traceln("all done") }

      Half the code and no need to artificially sleep. Monte syntax does not offer special help here; E would only need a couple extra braces. I think that “less easy” is a very apt way to describe the extra effort that Go requires of its users.

      1. 1

        What’s happening with def done? I see variable declaration syntax above it. It looks like a function. What is def doing?

        1. 2

          Yeah, E can be hard to read. Original E used def for even more things!

          In def done without a RHS, we are creating a promise. When we eventually bind done := true, we are resolving the promise for the name done with the value true. This is analogous to the original Go code, which used a channel of Boolean values to signal doneness.

          In def go(), we are creating a function. We are asynchronously calling this function with go<-(), which queues the function to be called later and returns a promise for the result. We’re discarding the results here, though. Since we call go<-() three times in a loop, we’ll have three workers running. Each worker closes over the same i variable and increments it, then passes its control on to the next iteration.

          Finally, a when-expression like when (done) -> { ... } waits until the given promise is resolved, and then runs the given block, returning a promise for the entire delayed action. This gives us a very flexible way to enqueue any amount of exotic control flow, representing each intermediate result as a promise.

    12. 3

      Look how is it is to start a goroutine

      I think this is a typo

    13. 2

      It’s nice as a little, well, tour to get a bit of a feel of the language and see how it roughly works and what it can roughly do, but it’s ill-suited to actually learn the language.

      Any recommended material on getting started with Go?

      1. 6

        The Go Programming Language book is quite good.

    14. 2

      Depends on what you are comparing it with. Most people who are ingrained with fork/wait or pthread_create/join aren’t complaining.

    15. 1

      I find rust easier (but not simpler). Just get it compiling then it should work, easy right? No footguns here and there.