1. 5

    oh this is weird to find a post about me at my internet watering hole.

    well shucks

    1. 11

      Lots of other things to comment on but I firmly think interfaces-as-structural-types is one of Go’s greatest strengths. To put it in the “bad” group is upsetting and kind of made me discount the rest of the article.

      1. 6

        I think it’s a philosophical difference:

        Some developers write code to document to coworkers what they are doing, and some developers just want things to compile with the least effort possible.

        1. 2

          It’s not a matter of least effort. Go is one of the first languages I know of that was primarily designed for large teams of engineers instead of individuals. Heavy focus on compile time, gofmt, not allowing compilation with unused variables, etc., all directly stem from this approach. Structural typing specifically reduces cross-team friction. Go will often make decisions that incur individual overhead to reduce overall team overhead.

          1. 5

            Not sure I agree on this.

            Compilation is not particularly fast, even compared to more modern languages like Algol; its dependency management is a disaster; its error handling ignores the last few decades of lessons learned; and the amount of code duplication it forces upon developers makes it hard to maintain.

            I think it does well in terms of helping Google’s requirements of having all code in a large mono-repo, and enabling people who have no practical experience to produce code.

            1. 2

              Whether or not they succeeded at being fast wasn’t my point (though my position is they did succeed). My point is the kinds of things they emphasized in language design. Russ Cox argues that compilation speed is one of the reasons they don’t have generics, for instance.

              Dependency management doesn’t matter with large teams in a monorepo, yeah. And the code duplication felt like it would be an enormous issue when I started, but in practice, half a million lines of code later, it doesn’t come up nearly as much as you’d think.

              1. 2

                Compilation doesn’t have to be slow just because generics are involved, the author of D demonstrated that fairly well. I think this is rather an issue of generics not having been invented at Bell Labs (or decent error handling in this regard).

                I’m not sure why “dependency management doesn’t matter if you are a Google employee” should be a convincing argument for programming-in-the-large for the millions of non-Googlers out there.

            2. 2

              > Structural typing specifically reduces cross-team friction.

              Can you talk about how structural typing accomplishes this?

              EDIT: Ah, I see you answered this in another thread.

          2. 3

            Languages are funny. I’d consider defer to be a bad idea elevated to a language feature, but it’s in the “good” group 😀

            1. 2

              Can you explain why you like this idea?

              1. 4

                Sure! Let’s say someone writes a library you use frequently but writes it such that it does not explicitly implement any interfaces, as the author of the above post prefers. Maybe you use a library called Beep like this:

                type Beeper1 struct { ... }
                func NewBeeper1() *Beeper1 { ... }
                func (b *Beeper1) Beep() { ... }

                You are writing your library, but want to support multiple implementations of Beepers. Maybe there’s another beeper (for a test, or another library, or something else) that also has a Beep() method. So you write your code to just expect

                type Beeper interface {
                    Beep()
                }

                Now you can use the third party code, your code, your test code, etc, without having to change the third party code upstream to implement your interface.

                This is a super contrived example, but as your codebase and team grows larger, this becomes incredibly useful for reducing friction in having teams of engineers work together with minimal stepping on each other’s toes.

                Ultimately, I describe Go’s structural typing system to Python programmers as the static typing equivalent of Python’s “duck typing” principle, which is: if it looks like a duck and quacks like a duck, just treat it like a duck. Coming from statically typed languages that require you to list which interfaces a concrete instance implements, Go not requiring that dance felt like a huge reduction in friction to me.
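                To make the duck-typing analogy concrete, here’s a minimal, self-contained sketch (all names hypothetical): a “real” type and a test double both satisfy the interface without ever declaring that they implement it.

```go
package main

import "fmt"

// Beeper is satisfied by any type with a Beep method;
// no "implements" declaration is needed anywhere.
type Beeper interface {
	Beep()
}

// RealBeeper stands in for a third-party type we can't modify.
type RealBeeper struct{}

func (RealBeeper) Beep() { fmt.Println("beep!") }

// FakeBeeper stands in for our own test double.
type FakeBeeper struct{ Count int }

func (f *FakeBeeper) Beep() { f.Count++ }

// BeepTwice accepts anything that structurally satisfies Beeper.
func BeepTwice(b Beeper) {
	b.Beep()
	b.Beep()
}

func main() {
	BeepTwice(RealBeeper{})
	f := &FakeBeeper{}
	BeepTwice(f)
	fmt.Println(f.Count) // prints 2
}
```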

                1. 2

                  I guess, to me, it feels like a strictly worse approach than what Rust has with traits, or Haskell with typeclasses, because there’s no explicit guarantee that a “Beeper” is actually abiding by the contract of the “Beeper” interface. It could have a “Beep” method that actually nukes Vienna. There’s friction to implementing a trait or typeclass for a new type, but there’s also value in it. If I have explicitly implemented a trait, there’s documentation of the type’s usage in that context as well as of its conformance with the interface.

                  1. 3

                    A frequent pattern in Go to get some of that functionality if you want it is to write something like

                    var _ InterfaceName = (*ConcreteType)(nil)

                    which adds a compile-time assertion that ConcreteType does indeed implement InterfaceName.

                    Certainly does nothing to constrain the behavior, but I’m super happy with that (optional) middle ground
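                    For illustration, here’s that assertion pattern in a tiny self-contained program (hypothetical names); if Horn ever loses its Beep method, the var line stops compiling.

```go
package main

// Beeper is the interface we want to guarantee Horn satisfies.
type Beeper interface {
	Beep()
}

type Horn struct{}

func (Horn) Beep() {}

// Compile-time assertion: costs nothing at runtime, but fails the
// build with a clear error if *Horn stops implementing Beeper.
var _ Beeper = (*Horn)(nil)

func main() {}
```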

                    1. 3

                      There exists a spectrum: let’s say that on one extreme, it’s maximum programmer friction with minimum risk of mis-use; and on the other extreme, minimum programmer friction with maximum risk of mis-use. Go puts a marker down closer to the latter extreme, judging friction to be a worse evil than risk for their context, and providing some affordances (like the one-liner jtolds mentions) to mitigate some of those risks via convention.

                      I think no position on the spectrum is “strictly worse” than any other. It is a question of trade-offs to satisfy particular programming contexts or requirements. I think this is the same for any technical decision-making.

                      Go makes a lot of decisions this way. (Not all, and there are warts for sure — but many.) I think it is a nice and refreshing change from where most languages (like Rust) decide to land, and I think Go’s success in the market proves it is a viable, or maybe even preferable, compromise-point for many users.

              1. 5

                A bit of shameless self-promotion, but I’m really proud of a library some friends at my company made called DBX: https://github.com/spacemonkeygo/dbx/

                It’s like SQLAlchemy, but completely type-safe and determined at compile-time. Many other Go ORMs aren’t quite as type-safe and do a lot of runtime reflection. DBX takes a schema upfront and generates multiple database backends and Go code to interact with your database in a type-safe way.

                It’s neat, check it out!

                1. 3

                  Scott Hanselman just put together a great list (and did so for the previous two years also): https://www.hanselman.com/blog/The2017ChristmasListOfBestSTEMToysForKids.aspx

                    1. 1

                      @jtolds on previous submission: > Okay so for what it’s worth, logging in is actually required so we can measure how long it takes from the first time you read the problem until you solve it

                      You need to say that somewhere in really big letters. I opened a problem because I was curious whether the problems were random little puzzles or closely tied to the sponsoring business. It looks like a five or ten minute puzzle, but I started making a cup of tea, planning a restaurant reservation, reading an interesting twitter thread, listening to a podcast…

                      1. 1

                        Since we’ve just enabled it so that anyone can view problems without logging in, we won’t be scoring by that criterion at all.

                        You’re right, that should have been more prominent.

                      1. 2

                        There’s no content here without signing up and giving an email address. It looks like it might be on topic, but it’s just lead gen.

                        1. 1

                          Okay so for what it’s worth, logging in is actually required so we can measure how long it takes from the first time you read the problem until you solve it, but you’re right, this looks spammy. I’ll change it so it doesn’t require a log in.

                        1. 4

                          To be honest, Go is a good language for just getting things done, but for something designed to be a server language, it has horrible runtime debuggability, introspection, and tracing features.

                          1. 4

                            I agree. The authors’ cultural biases and particular needs very deeply shaped the implementation. There are some people (myself included) who have worked on this somewhat, but it’s very hard to add after the fact, so to speak.

                            1. 4

                              We wrote MonKit (https://github.com/spacemonkeygo/monkit) to add a lot of these features. Our Go programs are some of the best programs I’ve ever worked on for tracing, introspection, and debuggability. Certainly we’re still missing some things, but overall it’s a fantastic experience.

                              1. 4

                                It may not have Java’s remote debuggability but pprof exposed via an endpoint can often get you pretty close to what you need.

                                Although I’m probably a bit biased here since I almost never debug using an interactive debugger. I’m more of a printf and read the code kind of guy.

                                1. 3

                                  I’m talking more about tapping into calls on a running system and running some stats and timing, maybe something like dtrace.

                                  I can’t see a way to answer questions like “How many times is this function being called? how many errors am I getting here?” without recompiling.

                                  I would love if Go provided a way for me to split the tracing and timing code out of my business logic.

                                  1. 3

                                    I’m not a Go fan, but if you’re on a platform with DTrace (e.g. SmartOS), I believe you can just use the pid provider to trace Go function calls. Because of the needlessly different calling convention you’ll probably need to learn about the stack structure to get arguments and return values, but it’s a start!

                                    1. 11

                                      Having written both the Solaris port and the SPARC port, I use DTrace with Go all the time. It’s a massive PITA, but still better than not being able to use DTrace.

                                      Getting arguments is pretty simple.

                                      #if defined(__sparcv9)
                                      #define goarg0 *(long long*)copyin(uregs[R_O6]+0x7ff+176+0, 8)
                                      #define goarg1 *(long long*)copyin(uregs[R_O6]+0x7ff+176+8, 8)
                                      #define goarg2 *(long long*)copyin(uregs[R_O6]+0x7ff+176+16, 8)
                                      #elif defined(__amd64)
                                      #define goarg0 *(long long*)copyin(uregs[R_RSP]+8+0, 8)
                                      #define goarg1 *(long long*)copyin(uregs[R_RSP]+8+8, 8)
                                      #define goarg2 *(long long*)copyin(uregs[R_RSP]+8+16, 8)

                                      There are other problems, however.

                                      The main problem is the lack of debugging information. Because of the way Go binaries are built, it’s impossible to add CTF to Go binaries the regular way you’d do it in C. So I added CTF support directly in the Go linker. However, I quickly learned that virtually all Go binaries exceed the CTF limits for a single object file (number of symbols and so on). You’d have to split Go object files, at least conceptually, in memory, into multiple object files so they’d fit into the CTF limits.

                                      This is very, very difficult to do under the current toolchain implementation. Truth be told, if CTF were more widely used I’d do it, but I bet the number of people who run DTrace or mdb(1) on Go binaries can be counted on fingers, and I am probably the largest user, and I have my own workarounds.

                                      There is the second aspect of this story, and that is that on Oracle Solaris, DTrace and mdb(1) got native DWARF support (which the Go linker has generated for a very long time). It turns out that if you restrict the DWARF in the binaries to only the subset that can be expressed in CTF, it’s not actually any larger than CTF. I suggest illumos and FreeBSD move to the same technique, as DWARF is ubiquitous.

                                      Apart from the lack of debugging information, the biggest problem is that in DTrace there is no language-level concept of goroutines. For example, even if you can easily extract a pointer to some goroutine, you can’t get a stack trace for that goroutine easily. This is solvable with DTrace providers, but I have not yet written one.

                                      There are also more minor nuisances, like the fact that the set of expressible identifiers in DTrace and mdb(1) is smaller than the set of identifiers expressible in Go. In particular, you can’t refer directly to most Go identifiers in DTrace or mdb(1). Usually when I need this I patch the linker to implement some mangling scheme, or I use the symbol address.

                                      The good news is that I do have plans to improve support for DTrace in Go, like writing a provider and so on. The bad news is that there is no ETA.

                                      Oh, and by the way:

                                      > needlessly different calling convention

                                      The calling convention is not needlessly different. It is different for very specific technical reasons, and it’s obvious from our previous discussions that you, and Joyent in general, have an anti-Go-implementation bias here.

                                      There are some not-very-concrete plans put forward by some people to change it to make it more compatible with the System V ABI (it can never be fully compatible), but it’s actually rather difficult to do and the priorities have been elsewhere. Of all the people who work on Go, I am probably the one who recognizes the need for this the most, so there is some rather good chance I might tackle it some day, but as always, there have been other priorities.

                                      1. 2

                                        > it’s obvious from our previous discussions that you, and in general Joyent, has an anti-Go-implementation bias here

                                        Calling my technical opinion a “bias” feels like a rhetorical technique to suggest that I couldn’t possibly have a legitimate criticism. Additionally, though I work at Joyent (even with Keith, now retired), I don’t think it’s fair to characterise my opinion of Go as the position of an entire company.

                                        We are a broad church and have quite a few engineering staff presently experimenting with Go, some of whom are attempting to actively engage with the Go community on illumos-specific matters. At least one of us is working on a new mdb module for Go, and you can see a bunch of recent activity in our Triton/Manta SDK for Go. We’re even working on an evolving set of engineering practices specifically for writing software in Go.

                                        > I bet the number of people who run DTrace or mdb(1) on Go binaries can be counted on fingers, and I am probably the largest user

                                        That may have been true in the past, but I don’t imagine it will be true in the future!

                              1. 3

                                tl;dr: in the long run, everyone has their moments of wealth and poverty, but at any given time, the distribution of wealth is exponential.

                                1. 8

                                  Within that experiment, sure. Unfortunately in the real world, having more money allows you to acquire more money much easier (e.g., not having to pay the bank interest for loans, actually having money you can invest, etc.), so there’s a bit of a runaway problem where fairness won’t necessarily come in the end without intervention.

                                  1. 1

                                    Practical forms of “intervention” include exponential dilution via children, marriages, etc.

                                    You can get around this with e.g. primogeniture but this is evolutionarily sub-optimal so most people don’t. Even the oldest banking families around today aren’t really all that old on a historical scale.

                                    1. 2

                                      I can see this could prevent long-lasting dynasties, but when few hold money it’s still bad for the economy, even if just for a single generation. The poor spend a much larger portion of their income than the wealthy, even in contrived environments. I do think the property rights of the few (yes, unlike the founding fathers, I do view property as a right) do not outweigh the rights to life, liberty, and the pursuit of happiness. If income inequality is so extreme that those basic rights are infringed, I think it is the government’s responsibility to normalize that effect.

                                1. 8

                                  This reminds me of the problem Mitzenmacher’s “power of two choices” load balancing scheme solves [1]. The problem is, basically, that if you throw n balls into n bins completely at random, some bins will have no balls while some bins will have more than one. Ultimately, the difference between the bin with the fewest balls and the bin with the most grows like log(n)/log(log(n)). This can actually become a pretty significant effect with large n.

                                  So, yeah, if you simply distribute money randomly, the difference between the people with the most money and the people with the least money won’t be very well load balanced. What’s more, in the case of this thought experiment, there’s an additional effect which is that some rounds, some players are out of money and can’t redistribute, which reduces the overall flow of money that round.

                                  Mitzenmacher’s power of two choices load balancing scheme is to randomly choose two bins, then place the next ball into the bin with the lesser amount of existing balls. With this strategy, the maximum skew is more like log(log(n))/log(2), which is way better distributed.

                                  I have no idea on how that load balancing observation could be used to create better wealth inequality policy.

                                  [1] https://www.eecs.harvard.edu/~michaelm/postscripts/mythesis.pdf

                                  1. 1

                                    > I have no idea on how that load balancing observation could be used to create better wealth inequality policy.

                                    Well, it would seem that a random distribution creates clumps and hidden money (all money past $1 is “hidden”).

                                    So, a system where everyone gets the money would be a better way, like UBI. The payment of goods and services may still be skewed, but UBI would not lock people out of it. The other half would be a progressive tax to reduce accumulation at the high end and rebalance the monetary system.

                                  1. 7

                                    Ugh, I added a new disclaimer to the top of my article that’s linked from this list (update 2 on http://www.jtolds.com/writing/2016/03/go-channels-are-bad-and-you-should-feel-bad/).

                                    1. 2

                                      The math is a bit strange:

                                      • 2 billion lines of code
                                      • Over a billion files
                                      • 35 million commits

                                      So this can’t all be code, right? Less than 2 lines per file, and maybe 28.5 files per commit?

                                      1. 4

                                        The full quote: “The Google codebase includes approximately one billion files and has a history of approximately 35 million commits spanning Google’s entire 18-year existence. The repository contains 86 TB of data, including approximately two billion lines of code in nine million unique source files.”

                                        So actually about 222 lines per unique source file, and perhaps about 4 commits per file.

                                        The linked article goes on to say that the over-one-billion file count comes from the inclusion of “source files copied into release branches, files that are deleted at the latest revision, configuration files, documentation, and supporting data files”.

                                        1. 1

                                          I remember reading an article that talked about how there are a lot of automated processes also committing things to their repo besides developers, so if that’s true I assume many of the files are artifacts of some kind.

                                          1. 3

                                            Plus there are probably plenty of non-code files like images as well.

                                        1. 2

                                          The errors and best practices section reminded me of Rust’s error-chain. It seems clear to me something like this should be in the standard library, both for Go and for Rust.

                                          1. 1

                                            Yeah, I really am missing ways to split the actual error I want to signal to users of my server (without leaking internal info) from debug info like stack traces.

                                            1. 2

                                              You might try the github.com/spacemonkeygo/errors package, which has rich support for attaching stack traces while keeping the error representation separate (for instance, see github.com/spacemonkeygo/errhttp).

                                          1. 2

                                            On the flipside, React Native has been simply wonderful.

                                            1. 5

                                              I love Golang, not least because of the standardization via the likes of gofmt; it makes any codebase very accessible.

                                              However, to contradict myself regarding standardization, what I dislike is the GOPATH hassle; it’s frankly a PITA to have all my Go code structured in a place away from all my other repos. For years I had a directory with all my repos underneath it, which made pathing easy, but with Go I need to maintain the whole Go source tree, e.g.

                                              $ tree Code/Go/src/ -L 1
                                              ├── 9fans.net
                                              ├── bitbucket.org
                                              ├── code.google.com
                                              ├── github.com
                                              ├── golang.org
                                              ├── gopkg.in
                                              └── honnef.co

                                              This makes navigating around the filesystem to work on files super cumbersome. Surely there must be a way of using GOPATH to find all the libraries Go needs for compilation, without making the programmer adhere to this structure.

                                              It’s about the only thing I can think of that I dislike; unfortunately it’s jarring since it’s something a new Gopher discovers right at the beginning of trying out the language. Anecdotally, I’ve seen more than one programmer walk away from the language right at this stage.

                                              1. 2

                                                I gotta say, my happiness with the GOPATH crap shot way up as soon as I configured my .bashrc to always set GOPATH to PWD. Now, my go incantations always assume the current directory is the root of the GOPATH and oh man, so good.

                                                ~ $ cd $(mktemp -d)
                                                /tmp/tmp.65pcSFeA1s $ go get -v github.com/whatever


                                                ~ $ cd whatever/project
                                                ~/whatever/project $ ls
                                                ~/whatever/project $ go install .../name
                                                ~/whatever/project $ ls
                                                bin pkg src
                                                1. 1

                                                  For the curious:

                                                  prompt_setup() {
                                                    export GOPATH=$PWD
                                                  }
                                                  1. 1

                                                    Interesting; thank you for sharing that. I’ll play with this setup and see what you’re getting at.

                                                    Much appreciated! :-)

                                                1. 6

                                                  Vala. My favorite pet project is in Vala so I’ll start with it.

                                                  1. My biggest complaint about Vala is I think it’s dying. As a single person I can’t stop it from dying. No matter how much code I turn out, as an individual there’s nothing I feel I can do to stop this. One of its best contributors (Luca Bruno) recently left the project. The creator never really seems to contribute as far as I can tell past issuing releases.
                                                  2. There’s also a complete lack of tooling. No good IDEs. It’s a second-class language even in Gnome-centric IDEs, and Gnome is supposed to be the primary use of the language. Debugging of any kind is, IMO, a PITA, mainly due to the fact it compiles to C before going to GCC.
                                                  3. I’ve never honestly been a huge fan of languages transpiled to another language before being actually compiled. I think this causes more headaches than it’s worth, and Vala absolutely demonstrates that with the insane number of compilation errors and warnings that GCC spits out.

                                                  Python. At work I use Python.

                                                  1. The fight between 2 and 3 drives me nuts. 99% of Stack Overflow examples are written in 2, which isn’t a huge problem in that I can almost always translate them, but it’s still a PITA.
                                                  2. The GIL naturally.
                                                  3. I don’t see a lot of complaints anymore floating around the net, but I think python packaging is a disaster. I have no idea what the difference between pip and pip3 is?? I have to use some packages that supposedly work with 2 and 3 but end up only working with one or the other.
                                                  1. 2

                                                    Vala is awesome

                                                  1. 5

                                                    I’ve used it a few times. I think the main issue that prevents me from using it more is that when I go to reach for it, I’m usually trying to send private key material to another developer, and then remember that GPG transferred stuff doesn’t have forward secrecy. If keybase.io had a way to transfer information to another user with forward secrecy, that’d be so sweet.

                                                    1. 1

                                                      post-it and fire, still undisputed champ of secret distribution

                                                      1. 1

                                                        Well, KBFS doesn’t have forward secrecy, but the saltpack standard does - see the ephemeral keypair generation in the header section. So you can’t have FS inside the keybase filesystem, but for individual files encrypted with the keybase CLI you can.

                                                      1. 4

                                                        I have the game installed (Android) and Google (https://security.google.com/settings/security/permissions?pli=1) doesn’t seem to think I’ve granted Niantic or Pokemon go any permissions, fwiw.

                                                        The Chromecast app does evidently have full access, which I guess I’m okay with since it’s Google anyways?

                                                        1. 2

                                                          I think the author mentions that it only happens on iOS.

                                                          1. 2

                                                            I believe the OP refers to the application having full access to your Google account (since they logged in with Google), not system permissions.

                                                            1. 4

                                                              that is also what I mean

                                                          1. 1

                                                            Why have the deferred function return func(*error) func(*context.Context) instead of func(*error, *context.Context)? I thought the former would make the call evaluate the error argument immediately?

                                                            1. 1

                                                              It returns func(*context.Context) func(*error), though. We want to set up the context before function execution and capture the error at the end.

                                                            1. 1

                                                              Is there any documentation on using such a library with something like statsd? One of the major selling points to me for using Codahale’s Metrics was the various statsd reporter implementations.

                                                              1. 1

                                                                The way we get the data to our graphite collector is unfortunately unnecessarily convoluted for most users, so that plugin hasn’t been released.

                                                                Nonetheless, if you skipped our internal system, getting all of the key/value data into a timeseries database is quite straightforward, on the order of something like:

                                                                  monkit.Stats(func(name string, val float64) {
                                                                      // send name, val to graphite with a timestamp
                                                                  })

                                                                It’s a good point that a simple wrapper should be built in. If you beat me to a commit that adds it I’ll merge it!

                                                                 Worth pointing out that statsd usually does data aggregation (max/min/etc.), whereas you might be able to go from monkit directly to your time series database without statsd, since monkit does the aggregation in-process.