1. 4

    I dislike these kinds of posts because instead of discussing effective uses of Go they discuss how to imitate language X in Go. That’s just not an appealing way to use a programming language.

    1. 6

      Many developers who learn LISP and other functional languages say it changed how they think about certain problems, and their coding style picks up on that. Some people also imitate useful idioms to get their benefits. So, while making no claim about this one, I think it’s always worth considering in general how one might expand a language’s capabilities.

      Double true if it has clean metaprogramming. :)

      1. 2

        I don’t entirely disagree. Maybe it’s just the quality of most of these posts that leaves something to be desired.

      2. 6

        The goal of the post was to show how you would solve problems in Go that you would commonly use sum types for in other languages, not how to “get” sum types in Go.

        I agree that the first two approaches are trying to imitate sum types, and there are disadvantages to that. But I would argue that using a visitor pattern is quite different, and is the “Go way” (as in it’s the only way that works harmoniously with the type system).
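
        For readers who haven’t seen it, this is roughly the shape of that approach; a minimal sketch with hypothetical names (Shape, Circle, Rectangle), not the post’s actual code:

        ```go
        package shapes

        // Each variant implements Accept; a visitor supplies one method per variant.
        type Shape interface {
            Accept(v ShapeVisitor)
        }

        type ShapeVisitor interface {
            VisitCircle(c Circle)
            VisitRectangle(r Rectangle)
        }

        type Circle struct{ Radius float64 }
        type Rectangle struct{ W, H float64 }

        func (c Circle) Accept(v ShapeVisitor)    { v.VisitCircle(c) }
        func (r Rectangle) Accept(v ShapeVisitor) { v.VisitRectangle(r) }
        ```

        Adding a new variant means adding a method to ShapeVisitor, so every existing visitor stops compiling until it handles the new case, which is the closest Go gets to an exhaustiveness check.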

      1. 7

        This only covers a small fraction of the use cases of sum types; namely, when there is a small set of standardized tasks that is shared across multiple types.

        You probably wouldn’t even use a sum type for this in Haskell or Rust; you would use a typeclass or a trait, which is basically what the author ended up doing in Go.

        By far the most useful feature of sum types (and further generalizations on multi-constructor types, like GADTs) is the exact representation of types with non-power-of-2 cardinalities. It’s hard to appreciate this if you’re used to working without it, but this single feature probably eliminates (conservatively) 60-70% of logic bugs I would make in languages like C or Java. I am not aware of any pattern or technique that satisfyingly reproduces this power in languages without native sum types.

        1. 3

          Could you give a simple example of that which a Go programmer might run into?

          1. 3

            The classic example is the null pointer. You want to represent either your data structure D or some special case representing absence or whatever. This has cardinality |D| + 1. The null pointer is the traditional way to express this, and it’s bad for obvious reasons.

            The second most straightforward example is when you have two different data structures depending on the situation: say, a success result D or an error description E. This has size |D| + |E|.

            Parsers are one of the most recognizable scenarios where you have types with weird sizes, corresponding to the various clauses of the grammar. This is, I believe, one of the primary things ADTs were invented for.

            One I ran into recently was representing a bunch of instructions in an ISA and their respective arguments.
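
            For a Go framing of the first two cases, here is a minimal hypothetical sketch (the User type and functions are made up for illustration) of how they usually get encoded today:

            ```go
            package example

            import "errors"

            type User struct{ Name string }

            // |User| + 1: a nil pointer stands in for absence, but nothing in the
            // type stops a caller from dereferencing the nil case.
            func findUser(id int) *User {
                if id == 42 {
                    return &User{Name: "gopher"}
                }
                return nil
            }

            // |User| + |error|: by convention exactly one value is meaningful, yet
            // the type also admits (non-nil, non-nil) and (nil, nil).
            func loadUser(id int) (*User, error) {
                if u := findUser(id); u != nil {
                    return u, nil
                }
                return nil, errors.New("user not found")
            }
            ```

            With a native sum type (an Option- or Result-style value) the invalid combinations simply can’t be constructed, which is the class of logic bug described above.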

          2. 2

            > when there is a small set of standardized tasks that is shared across multiple types

            Isn’t this what interfaces are for?

            > By far the most useful feature of sum types […] is the exact representation of types with non-power-of-2 cardinalities

            It would be great if you could provide an example of how this is useful.

          1. 2

            Java’s results are super surprising. I hold the JVM’s GC in extremely high regard, so I would love to see comments from someone who is more familiar with its implementation.

            1. 10

              Java is optimized for throughput, Go is optimized for latency. There is no free lunch.

              1. 3

                After reading into this more, it looks like the Java runtime has a number of GC algorithms available, uses heuristics to pick one at startup, and keeps tuning it as the program runs. The goal is to let it perform well under either low-latency or high-throughput requirements.

                In the Java benchmark results listed in the blog post, one version lets the runtime decide which algorithm to use, and the other explicitly uses the G1 collector. After reading the HotSpot docs, it looks like the concurrent mark-and-sweep (CMS) collector, which is similar to Go’s, might perform well under low-latency requirements.
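
                For reference, the collector can be chosen explicitly on the command line (benchmark.jar is a placeholder name here):

                ```
                java -XX:+UseG1GC -jar benchmark.jar             # G1
                java -XX:+UseConcMarkSweepGC -jar benchmark.jar  # CMS, concurrent mark-sweep
                ```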

              2. 7

                The Reddit user jcipar managed to get the max pause down to 22ms by tweaking GC parameters.

                He also mentioned that the JVM GC does a lot of online tuning, so the max pause times may drop over a longer run of the program. This is similar to the Racket GC, where the maximum pauses are >100ms at the start of the run, but converge to around 20ms as the program continues to run.

                It would be nice to run the benchmarks for a longer period of time, and only measure max pause times once this “ramp up” period is over.

                1. 1

                  Yeah, I was going to say. The magic of Java (and .NET, actually) is that they do much better over long run times with their server GCs. I’d like to see the benchmarks over the course of a day or even a week.

                2. 4

                  Gil Tene suggests that part of this is the lack of compaction in Go:

                  > .@jamie_allen Go’s (current) collectors don’t compact. Different problem space. Not compacting in Java mean not running very long.

                  1. 2

                    I wonder how they deal with heap fragmentation in that case?

                    1. 1

                      This makes sense at first blush. Java is pointer-mad.