1. 3
  1.  

  2. 4

    Sounds like the typical “thing X is impossible, because there were only 20 years of experience doing X successfully in 1960” argument, which seems to be common in the Go sphere.

    1. 4

      It is rather frustrating that the fallback in all criticism of Go is “Well, the authors could not find a solution they liked, so this is a good thing”. Well, no, not necessarily.

      1. 3

        What most people seem to overlook in criticisms and counter-criticisms of Go are Golang’s compatibility promises.

        To extend your comment: it’s a good thing “because once thing X ends up in Go, X’s API gets frozen and X can’t get breaking changes until Go 2 comes out.” That means the authors have to find a solution they like and will continue to support.

        1. 3

          It doesn’t apply in this context:

          > Finally, the Go tool chain (compilers, linkers, build tools, and so on) are under active development and may change behavior. This means, for instance, that scripts that depend on the location and properties of the tools may be broken by a point release.

          1. 2

            Absolutely agreed in this case, which is why I noted in my comment below that a batteries-included approach would be preferred.

            That comment was more directed at criticisms around the “G-word”.

            1. 2

              We shan’t speak of it :)

              1. -1

                Hiding behind such a promise isn’t much more than yet another excuse. Given all the past experience, deciding to make such a promise (while the language still has known, glaring holes) is just nuts.

                Anyway, it’s just the usual “we work at Google and are therefore by definition smarter than everyone else on this planet”.

                1. 3

                  In all fairness, it’s probably more along the lines of, “We built Unix, C, Plan 9, X, UTF-8, ed, memcached, the HotSpot JVM, Chubby, Gearman, and have won Turing Awards, and are therefore by definition smarter than everyone else on the planet”.

                  1. 4

                    “We built Unix, C, Plan 9, X, UTF-8 …” decades ago, and decided to ignore any scientific and technological progress since then. Go fits perfectly into this point of view.

                    Not everyone gets smarter with age.

          2. 3

            Well, no, that isn’t the “fallback” of “all criticism” of Go. The response is that the Go authors have a set of trade-offs they want to make in the language, and it’s a good thing only if those trade-offs are acceptable for your use cases.

        2. 2

          How do Go developers deal with this in the real world? Not being able to support different versions of the same library in a single project is understandable, but not letting one specify which version should be used sounds like a big oversight.

          In the current production system I work on, we require semantic versions. Dependencies are used across multiple systems, and one makes an explicit decision to consume a new version. This means builds are always reproducible. And one always knows what’s going into their production build. And one can make backwards-incompatible changes and give other projects time to consume them.

          1. 4

            JFTR, that’s how SoundCloud does it: http://peter.bourgon.org/go-in-production/ (scroll a bit down for the “Dependency Management” section)

            They have extensive experience in using Go in production (AFAIK since H1 2012, maybe even earlier), so I would say the stuff they came up with is sound.

            1. 3

              My approach was to create a Makefile to build my project. It would:

              1. go get any dependencies
              2. git checkout each dependency to the tag/SHA I indicated in the Makefile (since everyone used GitHub, this wasn’t a problem)
              3. go install
              4. go build my project

              This wasn’t hard for the 5-6 dependencies I had, and I had no concerns about it scaling, at least so long as I didn’t have some git-based repos, some Mercurial, a handful of tarballs… but that doesn’t seem to happen in the Go world, so I didn’t worry.
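
              A minimal sketch of that kind of Makefile, assuming a single hypothetical dependency at github.com/example/dep pinned to a made-up tag (the paths, names, and version are illustrative, not my actual setup):

              ```make
              # Hypothetical dependency, pinned to an explicit tag.
              DEP     = github.com/example/dep
              DEP_TAG = v1.2.3

              # Fetch without installing, check out the pinned tag, then install.
              # Note: recipe lines must start with a tab.
              deps:
              	go get -d $(DEP)
              	cd $(GOPATH)/src/$(DEP) && git checkout $(DEP_TAG)
              	go install $(DEP)

              build: deps
              	go build ./...
              ```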

              The biggest thing about Go & dependencies that worried me was that the quasi-official stance was: “you should always check out to master, and the library developer should always ensure master builds & never breaks backwards compatibility”. Do I trust the core language devs to get that right? Sure. Do I trust most library developers? Not a chance.

              That’s why I do think that a batteries-included approach would be better.

              1. 4

                > and the library developer should always ensure master builds & never breaks backwards compatibility

                That’s a ridiculous stance to take anyway. Are all major versions supposed to be new repos?

                1. 2

                  Yeah, that one really was a head scratcher. I think opinions on that have shifted since I last used Go, but I’m not sure.

                  ETA: Thinking back, I vaguely remember two differing reasons why that approach was taken. The first was “does it really matter, there’s so much churn right now that libraries come and go”, which was understandable… packages were being born & dying before anyone really considered making a major release. The second was “the core language does it this way so we should too”, which is a noble sentiment but likely unrealistic for libraries.

                  Any current Go users want to comment?

                  1. 1

                    If you’re concerned about versioning, then you can use a service like gopkg.in to map a URI to a branch in your repo.

                    I maintain several popular Go libraries and I follow the same practice as Go’s standard library: never introduce backwards incompatible changes.

                  2. 1

                    No. The beauty of using URIs for imports is that you can impose whatever scheme you want. There are several such services. One popular one is gopkg.in, which, for example, lets you tie package URLs to specific tagged versions in your repository.
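
                    For example, here is a minimal, runnable sketch using the real gopkg.in/yaml.v2 package (gopkg.in resolves that path to the v2 branch or tag of github.com/go-yaml/yaml, so the import path itself pins the major version):

                    ```go
                    package main

                    import (
                    	"fmt"

                    	// The ".v2" in the path is the version pin: gopkg.in serves
                    	// the v2 branch/tag of the underlying repository.
                    	yaml "gopkg.in/yaml.v2"
                    )

                    func main() {
                    	out, err := yaml.Marshal(map[string]string{"hello": "world"})
                    	if err != nil {
                    		panic(err)
                    	}
                    	fmt.Print(string(out))
                    }
                    ```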

                    1. 1

                      It quickly becomes painful to encode a lot of metadata in a URI. Package managers generally evolve to include various constraints, like checksums. But restricting to just the major version means builds are still not reproducible, as I commented above to @moses.

                      I think it’s also an odd choice to allow URLs in source files. That means making a project local involves making code changes or doing extra work elsewhere to make it clear that the URLs in source are now meaningless. In general, I believe package information in source is a failed experiment, having had to deal with it quite a bit in Erlang.

                      1. 1

                        > But restricting to just the major means builds are still not reproducible

                        You misread my comment. I gave you an example. Restricting to the major version is a feature of that service. It is not a requirement of the build tool. There are other services out there that let you put a SHA or a tag name in the URI.

                        > I think it’s also an odd choice to allow URLs in source files.

                        It’s one of the many things I love about Go.

                        > In general, I believe package information in source is a failed experiment

                        That experiment is flourishing in the Go community.

                2. 2

                  Can you depend on a specific SHA? In that case, your biggest problem is if the entire project is deleted, or someone erases the main-line version of history and replaces it with another, and nobody has ever forked the project. That seems reasonably safe. You could even signal to people which SHAs are considered “stable”. You could also build semantic versioning on top of git SHAs, as project metadata at the tip of master. Seems a little hacky, but definitely workable.
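
                  A minimal sketch of doing that by hand in a GOPATH layout (the repo path and SHA here are hypothetical):

                  ```sh
                  # Fetch the dependency without installing it, then pin it to an
                  # exact commit; unlike a branch or tag, a SHA can't silently move.
                  go get -d github.com/example/dep
                  git -C "$GOPATH/src/github.com/example/dep" checkout 1a2b3c4d5e6f
                  ```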

                  1. 2

                    In our build system we just use tags. And we are slowly moving over to GPG-signed tags, only accepting tags from specific sets of developers.

                    For any external dependency, we mirror it to our local git repo first, so the upstream can disappear and we are OK.
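
                    A rough sketch of that workflow with stock git commands (the URLs, repo names, and tag here are hypothetical):

                    ```sh
                    # Mirror the upstream dependency into a repo we control, so
                    # builds survive the upstream disappearing.
                    git clone --mirror https://github.com/example/dep.git
                    git -C dep.git push --mirror git@git.internal:mirrors/dep.git

                    # Consumers clone the mirror and verify the GPG signature on
                    # the tag (against their keyring) before checking it out.
                    git clone git@git.internal:mirrors/dep.git dep
                    git -C dep tag -v v1.4.0
                    git -C dep checkout v1.4.0
                    ```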

                    1. 1

                      I’d be nervous about tags because they’re much less indelible than SHAs. It’s easy to make a tag point to another SHA, whereas it’s a hassle to remove a SHA. Having a local repo makes this better, but I think it’s still easy to lose a local tag if the remote has deleted the tag.

                      The article also mentioned using tags, although the author sounded not that jazzed about them.

                      1. 2

                        Unfortunately, SHAs have a horrible user interface. There is zero semantic information, and that doesn’t scale very well; might as well use git submodules in that case. Given that we mirror things locally, groups own specific repos, and we are moving towards signed tags (you know who redirected the tag), it’s a pleasant solution. It’s important to remember: your fellow developers might be dumb, but they are not malicious. And if you’re on distributed version control, a backup exists on every developer’s machine.

                        The tag solution the OP linked to is actually rather poor. It only allows specifying the major version, which means builds are still not reproducible.

                        I agree with the author’s point here:

                        > Remember: everything about third-party code is a decision about trust. Waving the wand of “versioned dependencies” doesn’t change that. In fact, it might fool us into seeing guarantees that don’t exist.

                        The problem with the current state of affairs is it doesn’t work very well even with first party repos.

                  2. 1

                    It’s actually pretty simple to do in practice. I clone dependencies to our own archive, then everyone works from this “known good” set of library versions. When someone wants to update a library they can update and test it before committing back to our archive. It’s incredibly straightforward really.

                    1. 2

                      This does not suit the following use cases:

                      • If you need multiple versions accessible at the same time. For example, service A uses version v1 and service B uses version v2. Each service having its own complete copy of deps is possible, but rather frustrating given that source control tools already provide a mechanism to have multiple versions accessible.

                      • If you want reproducible builds. This can be a requirement from an external entity. But it’s also very handy for debugging. The current system makes git bisect difficult to use to track down errors.

                      • Knowing which versions of libraries are in your release. If you always get whatever ‘master’ is, then between reviewing code and building a release you can pick up new commits, which is very confusing.

                      1. 3

                        Can you elaborate on your second bullet point? Normally I’d see reproducible builds being an argument for copying dependencies into your own source control, since depending on some external entity would take that decision out of your hands.

                        I don’t understand your third bullet point: it seems like if dependencies are checked into your source control, then you don’t have to “always get whatever master is”; you can just use the dependency at the version it is on your branch.

                        1. 3

                          I’m assuming that you are suggesting that one take every dependency and put it into a single monolithic repository.

                          That works for some organizations; however, in my experience it is antithetical to SOA.

                          With multiple services, there is often a foundation of common components, and often these components are in their own repos. Sometimes these components have backwards-incompatible changes between versions, and moving the entire organization in lockstep is costly. And if the foundation components are in other repos, reproducible builds are not possible unless you can specify the version to use.

                          One could put the foundation components into each service’s source tree, but that does not scale well, IME. It means code is duplicated all over the place and makes it harder to upgrade code when necessary. And version control tools already support the idea of versioning code, so it seems unnecessary.