1. 55
    1. 10

      Embedding timezone data seems like a recipe for binaries going out of date very quickly. There are important user-visible changes to the database all the time: https://github.com/eggert/tz/commits/master

      1. 8

        I believe the implementation only uses the bundled tzdata when loading a time location from the system fails. So a Go program running on an up-to-date system should continue to work fine, as the bundled tzdata is just a fallback.
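
        If I'm reading the change right, opting in is just a blank import. A minimal sketch, assuming Go 1.15's new time/tzdata package:

        ```go
        // Importing time/tzdata embeds a copy of the timezone database in the
        // binary; it is only consulted when the system database can't be found.
        package main

        import (
            "fmt"
            "time"

            _ "time/tzdata" // opt-in fallback, new in Go 1.15
        )

        func main() {
            // LoadLocation tries the system tzdata first and only falls back
            // to the embedded copy if that lookup fails.
            loc, err := time.LoadLocation("Europe/Berlin")
            if err != nil {
                fmt.Println("lookup failed:", err)
                return
            }
            fmt.Println(time.Now().In(loc))
        }
        ```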

        1. 6

          But it will silently have bad behavior on a system without tzdata, rather than failing in a way that would let the operator install tzdata through system package management and get updates. If you’re somewhere with relatively stable timezone rules, you might never notice that your users are getting bad timezones until they complain.

          1. 4

            You’ll still get updates through Go: just compile with the latest Go release. Most people do this already, since new releases are highly backwards compatible.

            1. 2

              But the timezone db changes daily or weekly. And your OS will pull those updates automatically while Go releases are much slower and you’d need to recompile and redeploy your binary.

              1. 7

                The tzdata package doesn’t release daily or weekly? 2020a is from a few days ago; 2019c is from September (and there were only 3 releases in 2019).

              2. 6

                Remember that this is opt-in, and only a fallback. If you prefer not to risk using out-of-date information, don’t use the tzdata package? Though then you have to ensure that your users/machines are fully up to date.

                1. 3

                  If I understand correctly, yes, it’s technically opt-in – but there’s no easy way to opt out if a library dependency opts in, nor is there a mechanism to discourage libraries from importing it? cf https://github.com/golang/go/issues/38679#issue-607112207

                  1. 2

                    Your libraries can do plenty of bad things already, though. If anything, embedding tzdata is harmless compared to some of the nasty stuff one could do in a library’s init, like modify std error variables or declare global flags.

                    I think the answer here, besides good docs, is vetting what dependencies you add to your module (and the amount of extra dependencies they bring in).
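
                    To make "nasty stuff in init" concrete, a contrived sketch (badlib is hypothetical):

                    ```go
                    // A dependency's init runs before main and can quietly mutate
                    // global state for the entire program.
                    package badlib

                    import (
                        "errors"
                        "flag"
                        "io"
                    )

                    func init() {
                        // io.EOF is an ordinary variable, so a library can reassign it...
                        io.EOF = errors.New("surprise")
                        // ...or register global flags the main package never asked for.
                        flag.Bool("badlib-verbose", false, "noise from a dependency")
                    }
                    ```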

                    1. 1

                      That’s true and a fair perspective to take. I don’t care much personally, I was just trying to understand/clarify why some people resent this direction.

      2. 5

        Go prefers the system timezone database but will use the embedded data if it’s not available.
        #38017 explains the use cases that this change resolves.
        This change will mostly affect Unix-like systems without tzdata and Windows systems that don’t have Go installed.
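
        If I remember the release notes correctly, opting in is either a blank import of time/tzdata somewhere in the program or a build tag, roughly:

        ```
        go build -tags timetzdata .
        ```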

    2. 6

      From the 17-page doc on the linker work:

      Shift work from the linker to the compiler. The linker is a bottleneck of every build graph, even for the smallest incremental change. Unlike the linker, many instances of the compiler can run in parallel or even be distributed.

      This is exciting! Linkers are mostly terrible and haven’t seen significant changes in a long time: the compiler generates a lot of the data the linker could use, then throws it away, and at this point many of the linker’s historical responsibilities have been moved to the loader. I’m happy to see anyone in any language’s toolchain rework the linker.

      1. 2

        I agree - this work is very, very exciting! I briefly discussed it with Austin Clements last summer at GopherCon, and the document they published shortly after makes a very good summary of all the changes they intend to make.

        They knew it would take time to fully replace the old linker, but I’m pleasantly surprised that 1.15 already includes the new linker. It still behaves like the old linker in many ways, and there’s still lots to do, but it’s great progress.

        As much as I like projects like LLVM and how easy it is to implement languages on top of it, I also think that Go is taking full advantage of the fact that it has its own compiler and linker. They can carefully fine-tune both of them to the language, making incremental build times very fast.

    3. 5

      It seems that they are slowly but nicely improving the language, compiler, and runtime.

      Slide 49 mentions CPU feature detection. Does Go now have intrinsics such as SIMD intrinsics? Or is this still just to be used to select which function to run that was implemented in assembly?

      I haven’t really followed the generics discussion. Is there already an accepted proposal and an approximate ETA?

      1. 6

        I can’t answer either of your questions in detail, but I’ll try to give some pointers.

        The compiler does treat some standard library APIs as intrinsics, where possible. For example, here’s how it handles math.FMA on AMD64. You can see how it generates code to check for the feature at run-time.

        I don’t think the compiler is quite clever enough to do that kind of thing for hand-written code. It will usually work if you use pieces of the standard library that the compiler knows about, like math and math/bits, but I don’t think it will magically convert uses of arrays and slices into SIMD instructions today.
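
        To make that concrete, a small sketch of calls I believe the compiler intrinsifies (math.FMA has been around since Go 1.14):

        ```go
        package main

        import (
            "fmt"
            "math"
            "math/bits"
        )

        func main() {
            // On AMD64 this lowers to a CPU-feature check: a fused VFMADD
            // instruction when FMA is available, a software fallback otherwise.
            fmt.Println(math.FMA(2, 3, 1)) // 2*3 + 1 = 7

            // math/bits functions typically compile to single instructions
            // (POPCNT here) instead of ordinary function calls.
            fmt.Println(bits.OnesCount64(0b1011)) // 3
        }
        ```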

        As for generics, it does seem like they’re still working on the prototype, but no ETA is guaranteed. I imagine there are other priorities at play, especially right now.

        1. 2

          Thank you for the extensive answer!

        2. 1

          I’ll try to give some pointers.

          Clever.

    4. 2

      It’s nice to see slow and steady progress on what seem to me to be “boring” engineering things like the standard library and performance. Out of curiosity, do Lobsters who use golang in anger find the balance the golang team has picked there good or bad?

      1. 14

        Despite working on PL tools and being full of PL theory and Haskell nerds, my team has found Go to be a really productive tool in practice. My likes:

        • Opinionated, clear idioms. Code reviews waste less time on nits.
        • An acceptable, easy-to-explain type system. For a startup, the common alternative (since iteration speed is our constraint) is honestly Node.js or Ruby. (Our starting codebase is TypeScript-on-Node.js, which has its own set of glaring frustrations. In particular, the drift between NPM modules and their type declarations is a serious and unavoidable footgun.)
        • A lot of stuff just works out of the box (JSON parsing in the standard library, an HTTP server in the standard library, etc.; see the sketch after this list).
        • A native test runner. This is a huge win for junior engineers to not need to learn another library.
        • Not enough rope to hang yourself with. It’s more annoying to try to do the elegant thing, so people focus on getting it working first. This is great at helping junior engineers avoid rabbit holes, while senior engineers still have enough tools to build acceptable abstractions (you have to hold them the right way, but right is usually obvious) with acceptable developer overhead (annoying, but not devastating).
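
        On the works-out-of-the-box point, a minimal sketch: a JSON endpoint using only the standard library (the route and types here are made up):

        ```go
        package main

        import (
            "encoding/json"
            "log"
            "net/http"
        )

        type health struct {
            Status string `json:"status"`
        }

        func main() {
            http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
                w.Header().Set("Content-Type", "application/json")
                json.NewEncoder(w).Encode(health{Status: "ok"})
            })
            log.Fatal(http.ListenAndServe(":8080", nil))
        }
        ```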

        My biggest complaints are the lack of sum types and write-a-for-loop-for-every-slice-type syndrome. Errors also take a while to get used to, but are not nearly as bad as they appear once your team has developed a sense of how to use them correctly (e.g. when do I wrap an error vs. use its message? Answer: use the technique that signals your intent for this specific abstraction).
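
        For instance, here's roughly how we signal that intent (loadConfig is a made-up helper): wrapping with %w keeps the cause inspectable, while a plain message would hide it.

        ```go
        package main

        import (
            "errors"
            "fmt"
            "os"
        )

        func loadConfig(path string) error {
            f, err := os.Open(path)
            if err != nil {
                // %w wraps: callers can still match the underlying cause.
                return fmt.Errorf("loading config %q: %w", path, err)
            }
            defer f.Close()
            return nil
        }

        func main() {
            err := loadConfig("/nonexistent/app.conf")
            fmt.Println(errors.Is(err, os.ErrNotExist)) // true: the cause survives wrapping
        }
        ```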

        We also have a smaller, domain-specific build-analysis tool written in Haskell. The trade-offs between these languages are very stark. From a technology standpoint, I would argue that they fulfil different niches.

        Our programs in Haskell can express much more complex business logic much more clearly, in a big part thanks to algebraic data types (please, Robert Griesemer, all I want is sum types) and effect tracking (intended invariants are clearer, and are compiler-enforced). My biggest complaints here are:

        • Learning Haskell takes a specific interest and a long time. Any programmer can pick up Go in half a week. Our experience ramping Haskell engineers is about 3 months to trustworthy independent code review and 6 months to creating new projects. For their first month and a half, they spend a lot of time pairing.
        • There are too many ways to do things. Should we use an effect system? Which one? How should we handle records? Which pipeline library do we use? Servant seems cool, should we try it? For engineers just learning Haskell, this steepens the learning curve significantly.
        • Beginner-unfriendly documentation. It takes a while to grok types-as-documentation (empirically, about 4 months). Before then, selecting new libraries is very difficult for engineers new to Haskell because (1) they don’t understand what different APIs are possible and the trade-offs between them and (2) libraries often use language extensions or advanced techniques and they don’t understand the library’s source code.
        • Debugging experience is poor. When libraries use advanced language features, it’s often unclear from error messages whether you’re using the library wrong or the library has a bug. This is exacerbated by sparse documentation and difficult-to-read library source. This is a far, far cry from jump-to-definition/jump-to-implementation in Go.

        If going the Haskell route, make sure that your team has a lot of senior engineers who have significant experience in Haskell. Otherwise, your team will spend a lot of time figuring out the basics before they can become productive. The main benefit of Haskell is that an engineer who is ramped up on Haskell can jump into a large, complex Haskell codebase and almost immediately understand the core abstractions and invariants. These abstractions and invariants are also expressible in other languages, but are usually expressed through idiom and are less compiler-enforced.

        The biggest barrier to Haskell adoption here is that if your team already has that many senior engineers, they will likely be more productive in a language they can pick up quickly, and everyone can pick up Go.