1. 11
    1. 14

      I/O in Go is not buffered by default (explicit is better than implicit), so fmt.Println calls fmt.Fprintln(os.Stdout, ...), where os.Stdout is unbuffered. This is fine for interactive programs with small amounts of output, but for any performance-oriented filter you want fmt.Fprintln(buf, ...), where buf is a bufio.Writer that wraps os.Stdout. You’ll see at least a 10x performance improvement if the bottleneck is writing output. See for example the commit where I fixed this in GoAWK.

      Edit: The author does hint at this in his post, “I suspect that if the buffer fmt.Println is writing to wasn’t flushed so often there could be a dramatic increase in performance.” And thanks to Go’s excellent I/O interfaces, it’s only a two-line fix to do so.
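
      In case it helps anyone, the buffered version is only a couple of lines longer than the unbuffered one. This is a minimal sketch, not the actual GoAWK commit; writeLines stands in for whatever loop produces the output:

      ```go
      package main

      import (
          "bufio"
          "fmt"
          "io"
          "os"
      )

      // writeLines writes n lines to w. When w is a bufio.Writer, the lines
      // are batched into large writes instead of one syscall per line.
      func writeLines(w io.Writer, n int) {
          for i := 0; i < n; i++ {
              fmt.Fprintln(w, "line", i)
          }
      }

      func main() {
          // The two-line fix: wrap os.Stdout and flush before exiting.
          out := bufio.NewWriter(os.Stdout)
          defer out.Flush()

          writeLines(out, 3)
      }
      ```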

      1. 3

        Thanks for the tip, Ben. I just did a quick benchmark with buffering and it was 20% faster on the 1 million-line file. I’ll do a run later today and see what the impact is on the 1.27B dataset.

        1. 5

          Update: bufio took 22 minutes off the 71-minute run time. I’ve updated the post with all the details.

    2. 3

      no_tld := strings.TrimRight(record.Value, suffix)

      strings.TrimRight trims a cutset. This is almost certainly a bug, and should be strings.TrimSuffix.
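
      A quick demonstration of the difference (hostname invented for illustration):

      ```go
      package main

      import (
          "fmt"
          "strings"
      )

      func main() {
          host := "blog.golang.org"

          // TrimRight strips any trailing run of characters drawn from the
          // cutset ".org", so it also eats the final 'g' of "golang".
          fmt.Println(strings.TrimRight(host, ".org")) // blog.golan

          // TrimSuffix removes the suffix string exactly once, if present.
          fmt.Println(strings.TrimSuffix(host, ".org")) // blog.golang
      }
      ```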

      1. 2

        This is petty, but

        module rdns/main

        should be

        module marklit82/rdns

        “Main” is the name of the package he’s working on. The module is rdns. I think it’s unfortunate that Russ Cox chose the name “module” for a group of packages, and I tried to talk him out of it on Reddit, but it is what it is, and you should work with the conventions of the language. In Go, the “module” is a namespace for your project. A “package” is one directory within a project.
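
        For reference, the go.mod at the project root would then begin like this (the go directive version here is a guess):

        ```
        module marklit82/rdns

        go 1.17
        ```

        main.go in that directory still declares package main; only the module path changes.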

        1. 2

          This version of the code calls publicsuffix.Update(), which makes an HTTP request to fetch the latest public suffix list from GitHub. That is an acceptable design decision, but it makes the results incomparable with the Rust and Python versions, which AFAICT just use a cached PSL.

      2. 1

        Thanks for the feedback. I’m working my way through these comments and will do another run hopefully later today.

        I did swap out TrimRight for TrimSuffix, but instead of getting the same output, the hostname came back completely blank. The 22 GB of output produced is the same size as what the Rust version generated, so I’m pretty sure I’m getting roughly the same output (though the TLD parsers may have unique edge cases).

        1. 3

          I don’t know what to tell you, but TrimRight is definitely wrong. It only seems to work if dot is not in the cutset.


          My guess is that it trims “blog.golang.org” to “blog.” and then you split it into “blog” and “” and take the last one. It would make more sense to TrimSuffix(s, ".") and then instead of doing a Split (which does too much work and generates garbage) do

          if i := strings.LastIndexByte(s, '.'); i != -1 {
            s = s[i+1:]
          }

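          Putting that together as a runnable sketch (lastLabel is a hypothetical helper name, not from the original code):

          ```go
          package main

          import (
              "fmt"
              "strings"
          )

          // lastLabel returns the final dot-separated label of a hostname,
          // without the intermediate slice that strings.Split would allocate.
          func lastLabel(s string) string {
              s = strings.TrimSuffix(s, ".") // drop a single trailing dot, if any
              if i := strings.LastIndexByte(s, '.'); i != -1 {
                  s = s[i+1:]
              }
              return s
          }

          func main() {
              fmt.Println(lastLabel("blog.golang.org.")) // org
              fmt.Println(lastLabel("localhost"))        // localhost
          }
          ```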

    3. 3

      One of my big hopes for generics in 1.18 is that JSON encoding/decoding gets faster, because we can use concrete types and avoid reflection (obviously this will take time, as the stdlib will be mostly generics-free for some time).

      The encoding/json performance problems in the go standard library are a major issue, though not an insurmountable one.

      My suggestion to the author is to try a non-stdlib / optimized JSON unmarshaler.

      1. 3

        That’s not going to happen. The way generics work doesn’t allow for compile-time execution (or even specialization, although that will probably happen eventually), so there’s no way serialization will ever work with Go generics. For the foreseeable future, if you want to use a concrete type when serializing, it will need to be go:generated.

      2. 3

        Yes, it’s likely that JSON unmarshaling is a performance bottleneck here. For “big data” JSON filtering, this is a real concern. I saw https://github.com/bytedance/sonic just the other day, and I know there are other performance-focused JSON libs.

        That said, I think it’s a stretch to say the “the encoding/json performance problems in the go standard library are a major issue”. For many common use cases like web applications it just doesn’t matter much. I’ve used Go’s encoding/json extensively for small servers as well as reasonably high-throughput servers (10s or 100s of requests per second, not thousands or millions) and it works fine – database query performance was usually the bottleneck.
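
        For what it’s worth, even staying in the stdlib, decoding into a concrete struct (rather than map[string]interface{}) and streaming with json.Decoder usually helps for this kind of line-oriented filtering. A sketch with made-up field names:

        ```go
        package main

        import (
            "encoding/json"
            "fmt"
            "io"
            "strings"
        )

        // record lists only the fields the filter needs; unknown fields in
        // the input are skipped cheaply. (Field names are hypothetical.)
        type record struct {
            Name  string `json:"name"`
            Value string `json:"value"`
        }

        func main() {
            input := `{"name":"x.example.com","value":"1.2.3.4"}
        {"name":"y.example.org","value":"5.6.7.8"}`

            // json.Decoder reads one value at a time, so the whole file
            // never has to sit in memory at once.
            dec := json.NewDecoder(strings.NewReader(input))
            for {
                var r record
                if err := dec.Decode(&r); err == io.EOF {
                    break
                } else if err != nil {
                    panic(err)
                }
                fmt.Println(r.Name, r.Value)
            }
        }
        ```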

        1. 2

          How much of an issue it is depends on how much time your program spends in each area; if your program is mostly processing JSON and the JSON library is slow, it’s an issue (as in this case).

          The other piece of context here is how the performance compares to other runtimes; this is where the standard library implementation suffers the most, as it’s significantly slower than its counterparts in other common runtimes.

          1. 2

            What other common runtimes are significantly faster than Go, and in which dimensions? As far as I’m aware, Go’s runtime, which is optimized for latency, is as fast or faster than anything on the market, but I’d be happy to see data suggesting otherwise!

            1. 2

              Sorry, I should have been clearer: *runtimes’ standard library JSON packages. I used the term “runtime” poorly there.

              1. 1

                Gotcha! Thanks.