  1. 24

    Upgrading @golang versions is actually a pleasurable task for me:

    1. I’m 99% sure nothing will break.
    2. Speedups of 5-10% are common.
    3. New compiler or vet warnings tell me how to improve my code.
    4. Excellent release notes.

    Does any other language get this as right?

    1. 7

      Go’s secret sauce is that they never† break BC. There’s nothing else where you can just throw it into production like that because you don’t need to check for deprecations and warnings first.

      † That said, 1.17 actually did break BC for security reasons. If you were interpreting URL query parameters so that ?a=1&b=2 and ?a=1;b=2 mean the same thing, that’s broken now, because support for semicolons as query separators was removed. Seems like the right call, but definitely one of the few times where you could get bitten by a Go upgrade.
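
      Roughly, the change looks like this (a minimal sketch; the point is only that ParseQuery now reports an error when it sees a semicolon, not the exact error text):

      ```go
      package main

      import (
          "fmt"
          "net/url"
      )

      func main() {
          // Before 1.17, ';' was accepted as a query separator, so this parsed
          // as two parameters. From 1.17 on, ParseQuery returns an error for it.
          v, err := url.ParseQuery("a=1;b=2")
          fmt.Println(v, err)

          // The '&' form is unaffected.
          v, err = url.ParseQuery("a=1&b=2")
          fmt.Println(v, err) // map[a:[1] b:[2]] <nil>
      }
      ```

      net/http also gained an AllowQuerySemicolons wrapper for handlers that really do need the old behavior.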

      Another issue is that the language and standard library have a compatibility guarantee, but the build tool does not, so e.g. if you didn’t move to modules, that can bite you. Still, compared to Python and Node, it’s a breath of fresh air.

      1. 2

        I’ve been upgrading since 1.8 or so. There have been (rarely) upgrades that broke my code, but it was always for a good reason and easy to fix. None in recent memory.

        1. 1

          Are semicolons between query params a common practice? I’ve never heard of this before.

          1. 2

            No, which is why they removed it. It was in an RFC, which is why it was added in the first place.

          2. 1

            1.16 or 1.15 also broke backwards compatibility with the TLS ServerName thing.

          3. 4

            Java is damn good about backward compatibility.

            From what I recall, their release notes are pretty good as well.

            1. 3

              I had a different experience: going from Java 8 to Java 11 broke countless libraries for me. Especially bad is that they often break at run time rather than at compile time.

              1. 2

                As someone with just a little experience with Go, what’s the situation with dependencies? With Java and Maven it becomes a nightmare of exclusions when one wants to upgrade a dependency, as transitive dependencies might then clash.

                1. 3

                  It’s a bit complicated, but the TL;DR is that Go 1.11 (this is 1.17, recall) introduced “modules”, which is the blessed package management system. It’s based on URLs (although weirdly, it’s github.com, not com.github, hmm…) that tell the system where to download external modules. Modules are versioned by git tags (or the equivalent for non-git SCMs). Your package can list the minimum versions of the external packages it wants, and also hardcode replacement versions if you need to fork something.

                  The expectation is that if you need to break BC as a library author, you will publish your package under a new URL, typically by adding /v2 or whatever to the end of your existing URL. Package users can import both github.com/user/pkg (the v1 line keeps the bare path) and github.com/user/pkg/v2 into the same program and both will be used, but if you want e.g. both v1 and v1.5 in the same application, you’re SOL. It’s extremely opinionated in that regard, but I haven’t run into any problems with it.

                  Part of the backstory is that before Go modules, you were just expected to never break BC as a library author, because there was no way to signal it downstream. When they switched to modules, Russ Cox basically tried to preserve that property by requiring URL changes for new major versions.
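
                  As a rough sketch of those conventions (every module path and version here is made up), an application’s go.mod might look like this:

                  ```
                  module example.com/myapp

                  go 1.17

                  require (
                      github.com/user/pkg v1.4.2    // pre-v2 releases live at the bare path
                      github.com/user/pkg/v2 v2.0.3 // the BC-breaking major gets a new /v2 path
                  )

                  // replace points an import path at a fork (or a local checkout) instead.
                  replace github.com/user/pkg => github.com/someone/pkg-fork v1.4.3
                  ```

                  In source files the two majors are then imported side by side, one as github.com/user/pkg and the other as github.com/user/pkg/v2, usually under different local alias names.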

                  1. 2

                    The module name and package ImportPath are not required to be URLs. Treating them as URLs is an overloading done by go get; nothing in the language spec requires them to be URLs.

                    1. 2

                      Yes, but I said “TL;DR” so I had to simplify.

                  2. 2

                    I also have only a little experience with Go. I have not yet run into frustrations with dependencies via Go modules.

                    Russ Cox wrote a number of great articles about how Go’s dependency management solves problems with transitive dependencies. I recall this one being very good (https://research.swtch.com/vgo-import). It also calls out a constraint that programmers must follow:

                    In Go, if an old package and a new package have the same import path, the new package must be backwards compatible with the old package.

                    Is this constraint realistic and followed by library authors? If not, you’re going to run into problems with Go modules.
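
                    To make the constraint concrete (hypothetical paths and versions), suppose two of your dependencies each declare a minimum version of the same package:

                    ```
                    // go.mod of dependency A
                    require example.com/lib v1.2.0

                    // go.mod of dependency B
                    require example.com/lib v1.4.0
                    ```

                    The module system builds the whole program with the newest of the declared minimums, v1.4.0 here. That is only safe if v1.4.0 still behaves like v1.2.0 as far as A is concerned, which is exactly the compatibility rule quoted above; a genuinely breaking change is supposed to move to example.com/lib/v2 instead.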

                    I’ve run into dependency hell in Java, JavaScript, Python, and PHP: in every programming language I’ve had to do major development in. It’s a hard problem to solve!

                    1. 1

                      Is this constraint realistic and followed by library authors? If not, you’re going to run into problems with Go modules.

                      It is (obviously) not realistic for most software produced in the world.

                    2. 1

                      I strongly agree. The first time major stuff broke was Java 9, which is exceedingly recent, and wasn’t an LTS. And that movement has more in common with the Go 2 work than anything else, especially as Java 8 continues to be fully supported.

                  3. 2

                    Not even mentioned in the blog post, but really nice for certain users, is the addition of cgo.Handle.

                    Way back in Go 1.4, you could get away with passing a pointer to any kind of Go object to a C function, and the C code could retain that pointer (as long as the object pointed to was referenced somewhere else so it didn’t get GC’d, of course). This was thoroughly useful if you wanted to give some C library a Go function as a callback, and let it pass some Go data back to that Go callback.

                    Go 1.5’s concurrent garbage collector broke all of that, and after a period of confusion where it wasn’t obvious what you could do, 1.6 shipped the cgo pointer passing rules, which, among other things, include no retention. You can pass a Go pointer to a C function and it will be safe for the duration, but once that call returns, the pointer is a hot potato.

                    Since then, if you wanted to store some Go data as a callback payload, you would have to, for instance, store your data in a Go map under a unique key, pass that key to your C library, and have your callback functions take the key and fetch the real data from the map (which additionally needs to be protected by a mutex, or be a sync.Map… which itself wasn’t introduced until Go 1.9).
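
                    Roughly, that workaround looked something like this (a hand-rolled sketch, not any particular library’s code):

                    ```go
                    package callbacks

                    import "sync"

                    // A registry mapping integer keys to Go values, so that only the
                    // key (not a Go pointer) ever crosses into C.
                    var (
                        mu    sync.Mutex
                        next  uintptr = 1
                        store = map[uintptr]interface{}{}
                    )

                    // register stores v and returns the key to hand to the C library.
                    func register(v interface{}) uintptr {
                        mu.Lock()
                        defer mu.Unlock()
                        k := next
                        next++
                        store[k] = v
                        return k
                    }

                    // lookup is called from the Go callback that C invokes with the key.
                    func lookup(k uintptr) interface{} {
                        mu.Lock()
                        defer mu.Unlock()
                        return store[k]
                    }
                    ```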

                    In 1.17 you can just do cgo.NewHandle(foo) and get back a pointer-sized int which can be passed to C and retained, and cgo.Handle(bar).Value() to fetch the original thing back. It’s using a sync.Map under the hood, so it’s really the same trick, but now you don’t have to manage the storage yourself; you can just ask the runtime “hey, put this in a box for me”, and open the box later.
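
                    A minimal sketch of the 1.17 version. Only the Go side is shown; in real code the uintptr below is what would travel through C (e.g. as a void* callback argument) and come back into an exported Go callback:

                    ```go
                    package main

                    import (
                        "fmt"
                        "runtime/cgo"
                    )

                    type payload struct{ msg string }

                    func main() {
                        // Box a Go value; the handle is an integer-sized value that
                        // C code is allowed to retain.
                        h := cgo.NewHandle(&payload{msg: "hello from the box"})

                        // Pretend this round-tripped through the C library.
                        raw := uintptr(h)

                        // Convert back and recover the original value.
                        p := cgo.Handle(raw).Value().(*payload)
                        fmt.Println(p.msg)

                        // Free the slot once C can no longer call back with it.
                        h.Delete()
                    }
                    ```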