Java’s results are super surprising. I hold the JVM’s GC in extremely high regard, so I would love to see comments from someone who is more familiar with its implementation.
Java is optimized for throughput, Go is optimized for latency. There is no free lunch.
After reading into this more, it looks like the Java runtime ships several GC algorithms and uses heuristics to pick one at startup. The goal is to let it perform well under either low-latency or high-throughput requirements.
In the Java benchmark results listed in the blog post, one version lets the runtime decide which algorithm to use, and the other explicitly uses the G1 collector. After reading the HotSpot docs, it looks like the concurrent mark-and-sweep (CMS) collector, which is similar in design to Go’s, might perform well under low-latency requirements.
Reddit user jcipar managed to get the max pause down to 22ms by tweaking parameters.
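We don’t know which flags jcipar actually used, but collector selection and pause goals are controlled by JVM command-line flags. A hypothetical invocation (the heap size, pause goal, and `Benchmark` class name are all made-up values for illustration):

```shell
# Explicitly select G1 and ask for a soft pause-time goal (illustrative values,
# not jcipar's actual settings):
java -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -Xmx4g Benchmark

# Or explicitly select the concurrent mark-and-sweep collector instead:
java -XX:+UseConcMarkSweepGC -Xmx4g Benchmark
```

Note that `-XX:MaxGCPauseMillis` is a goal, not a guarantee; G1 adjusts its behavior to try to meet it.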
He also mentioned that the JVM GC does a lot of online tuning, so the max pause times may drop over a longer run of the program. This is similar to the Racket GC, where the maximum pauses are >100ms at the start of the run, but converge to around 20ms as the program continues to run.
It would be nice to run the benchmarks for a longer period of time and only measure max pause times once this “ramp up” period is over.
Ya - I was going to say. The magic of Java (and .NET, actually) is that they’re much better given long run times with their server GCs. I’d like to see the benchmarks over the course of a day or even a week.
Gil Tene suggests part of this is the lack of compaction in Go:
.@jamie_allen Go’s (current) collectors don’t compact. Different problem space. Not compacting in Java mean not running very long.
I wonder how they deal with heap fragmentation in that case?
This makes sense at first blush. Java is pointer-mad.